Documentation

Whether you would like to use OSRD, understand it or contribute, this is the right place!

1 - Tutorials

Step by step, start to finish guides

Tutorials take you by the hand through a series of steps to complete small projects. Start here if you’re new to OSRD. Also look at the “First steps”.

2 - Explanations

Learn more about key concepts

Explanations discuss key topics and concepts at a fairly high level and provide useful background information and explanation.

2.1 - Containers architecture

How the containers work together and how they are built

There are 3 main containers deployed in a standard OSRD setup:

  • Gateway (includes the frontend): Serves the front end, handles authentication and proxies requests to the backend.
  • Editoast: Acts as the backend that interacts with the front end.
  • Core: Handles computation and business logic, called by Editoast.

Standard deployment

The standard deployment can be represented with the following diagram.

flowchart TD
    gw["gateway"]
    front["front-end static files"]
    gw -- local file --> front
    
    browser --> gw
    gw -- HTTP --> editoast
    editoast -- HTTP --> core

External requests are received by the gateway. If the requested path starts with /api, the request is forwarded over HTTP to editoast; otherwise the gateway serves the file matching the requested path. Editoast reaches the core over HTTP when required.
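
The routing rule can be summed up with the following minimal sketch (the helper names are hypothetical, for illustration only; the real gateway is a configured reverse proxy, not hand-written Python):

def forward_to_editoast(path: str) -> str:
    # Placeholder: the real gateway proxies the request to editoast over HTTP.
    return f"proxied to editoast: {path}"

def serve_static_file(path: str) -> str:
    # Placeholder: the real gateway serves the front-end bundle from disk.
    return f"static file: {path or '/index.html'}"

def route_request(path: str) -> str:
    # Paths starting with /api go to editoast, everything else is served
    # from the front-end static files.
    if path.startswith("/api"):
        return forward_to_editoast(path)
    return serve_static_file(path)

print(route_request("/api/infra/"))       # proxied to editoast
print(route_request("/assets/logo.svg"))  # served as a static file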

The gateway is not just a reverse proxy with the front-end bundle included: it also provides all the authentication mechanisms, using OIDC or tokens.

2.2 - Models

What is modeled in OSRD, and how it is modeled

2.2.1 - Infrastructure example

Explains using an example how infrastructure data is structured

Introduction

This page gives an example of how the data formats are used to describe an infrastructure in OSRD.

For this purpose, let’s take as an example the following toy infrastructure:

Toy infrastructure diagram

This diagram is an overview of the infrastructure with lines and stations only.

This infrastructure is not meant to be realistic, but rather meant to help illustrate OSRD’s data model. This example will be created step by step and explained along the way.

The infrastructure generator

The OSRD repository contains a Python library designed to help generate infrastructures in a format understood by OSRD.

The infrastructure discussed in this section can be generated with the small_infra.py file. To learn more about the generation scripts, you can check out the related README.

Tracks

Track sections

The first objects we need to define are TrackSections. Most other objects are positioned relative to track sections.

A track section is a section of rail (switches not included). One can choose to divide the tracks of their infrastructure into as many track sections as they like. Here we chose to use the longest track sections possible, which means that between two switches there is always a single track section.

Track sections are what simulated trains roll on. They are the abstract equivalent of physical rail sections. Track sections are bidirectional.

In this example, we define two tracks for the line between the West and North-East stations. We also have overpassing tracks at the North and Mid-West stations for added realism. Finally, we have three separate tracks in the West station, since it’s a major hub in our imaginary infrastructure.

Track sections diagram

These attributes are required for the track section to be complete:

  • length: the length of the track section in meters.
  • geo: the coordinates in real life (geo is for geographic), in the GeoJSON format.
  • sch: the coordinates in the schematic view (sch for schematic, simplified representation), also in GeoJSON format.
  • cosmetic attributes: line_name, track_name, track_number which are used to indicate the name and labels that were given to the tracks / lines in real life.

For all track sections in our infrastructure, the geo and sch attributes are identical, and very much resemble the given diagram.

For most track sections, their length is proportional to what can be seen in the diagram. To preserve readability, exceptions were made for TA6, TA7, TD0 and TD1 (which are 10km and 25km).
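
As an illustration, here is what the attributes of a single track section could look like, written as a Python dictionary. This is only a hedged sketch mirroring the list above: the identifiers and values are made up and the exact railjson layout may differ.

track_section = {
    "id": "TA0",
    "length": 2000.0,  # meters
    # Geographic and schematic geometries, both in GeoJSON
    "geo": {"type": "LineString", "coordinates": [[-0.37, 49.49], [-0.35, 49.49]]},
    "sch": {"type": "LineString", "coordinates": [[-0.37, 49.49], [-0.35, 49.49]]},
    # Cosmetic attributes
    "line_name": "West to North-East line",
    "track_name": "V1",
    "track_number": 1,
}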

Node

A Node represents a node in the infrastructure. In an OSRD simulation, a train can only move from one section of track to another if they are linked by a node.

Node Types

NodeTypes have two mandatory attributes:

  • ports: A list of port names. A port is an endpoint connected to a track section.
  • groups: A mapping from group names to lists of branches (a branch is a connection between two ports) that characterises the different possible positions of the node type.

At any time, all nodes have an active group, and may have an active branch, which always belongs to the active group. During a simulation, changing the active branch inside a group is instantaneous, but changing the active branch across groups (changing the active group) takes configurable time. This is because a node is a physical object, and changing active branch can involve moving parts of it. Groups are designed to represent the different positions that a node can have. Each group contains the branches that can be used in the associated node position.

The duration needed to change group is stored inside the Node, since it can vary depending on the physical implementation of the node.

Our examples currently use five node types. Node types are just like other objects, and can easily be added as needed using extended_switch_type.

1) Link

This one represents the link between two sections of track. It has two ports: A and B.

Link diagram

It is used in the OSRD model to create a link between two track sections. This is not a physical object.

2) The Point Switch

The ubiquitous Y switch, which can be thought of as either two tracks merging, or one track splitting.

This node type has three ports: A, B1 and B2.

Point switch diagram

There are two groups, each with one connection in their list: A_B1, which connects A to B1, and A_B2 which connects A to B2.
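
To make this concrete, here is a sketch of how the point switch node type could be written down as a Python dictionary (the branch representation is illustrative, not the exact hard-coded definition):

point_switch_type = {
    "ports": ["A", "B1", "B2"],
    "groups": {
        # Each group lists the branches that can be used in that position.
        "A_B1": [{"src": "A", "dst": "B1"}],
        "A_B2": [{"src": "A", "dst": "B2"}],
    },
}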

Thus, at any given moment (except when the switch moves from one group to another), a train can go from A to B1 or from A to B2 but never to both at the same time. A train cannot go from B1 to B2.

A Point Switch only has two positions:

  • A to B1
  • A to B2

Point switch position diagrams

3) The Crossing

This is simply two tracks crossing each other.

This type has four ports: A1, B1, A2 and B2.

Cross Switch Diagram

It has only one group containing two connections: A1 to B1 and A2 to B2. Indeed this kind of switch is passive: it has no moving parts. Despite having a single group, it is still used by the simulation to enforce route reservations.

Here are the two different connections this switch type has:

  • A1 to B1
  • A2 to B2

Cross switch position diagrams

4) The Double slip switch

This one is more like two point switches back to back. It has four ports: A1, A2, B1 and B2.

Double cross switch diagram

However, it has four groups, each with one connection. The four groups are represented in the following diagram:

  • A1 to B1
  • A1 to B2
  • A2 to B1
  • A2 to B2

Diagrams of double slip switch positions

5) The Single slip switch

This one looks like a cross between a point switch and a crossing. It has four ports: A1, A2, B1 and B2.

Single slip switch diagram

Here are the three connections that can be made by this switch:

  • A1 to B1
  • A1 to B2
  • A2 to B2

Diagrams of single slip switch positions

Back to nodes

A Node has three attributes:

  • node_type: the identifier of the NodeType of this node.
  • ports: a mapping from port names to track section endpoints.
  • group_change_delay: the time it takes to change which group of the node is activated.

The port names must match the ports of the chosen node type. The track section endpoints can be start or end; be careful to choose the appropriate one.
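
For example, a point switch node could be described as follows. This is a hedged sketch: the track ids, the delay value and the exact field layout are made up for illustration.

node = {
    "node_type": "point_switch",
    "ports": {
        # Each port of the node type is mapped to a track section endpoint.
        "A": {"track": "TA0", "endpoint": "end"},
        "B1": {"track": "TA1", "endpoint": "start"},
        "B2": {"track": "TA2", "endpoint": "start"},
    },
    "group_change_delay": 4.0,  # seconds needed to move from one group to another
}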

Most of our example’s nodes are regular point switches. The path from the North station to the South station has two crossings. Finally, there is a double slip switch right before the main line splits into the North-East and South-East lines.

Track sections and points diagram

It is important to note that these node types are hard-coded into the project code. Only the extended_node_type added by the user will appear in the railjson.

Curves and slopes

Curves and Slopes are instrumental to realistic simulations. These objects are defined as a range between a begin and an end offset on one track section. If a curve or slope spans more than one track section, it has to be added to all of them.

The slope / curve values are constant on their entire range. For varying curves / slopes, one needs to create several objects.

Slope values are measured in meters per kilometer, and curve values are measured in meters (the radius of the curve).

In the small_infra.py file, we have slopes on the track sections TA6, TA7, TD0 and TD1.

There are curves as well, on the track sections TE0, TE1, TE3 and TF1.
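
As a hedged illustration (field names and values are made up, the exact railjson layout may differ), a constant slope and a constant curve each covering a range of a single track section could be written as:

# Offsets are in meters from the start of the track section.
slope = {"track": "TA6", "begin": 0.0, "end": 5000.0, "gradient": 3.0}   # m/km, positive = ramp
curve = {"track": "TE0", "begin": 0.0, "end": 2000.0, "radius": 1200.0}  # curve radius in meters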

Interlocking

All the objects so far contribute to the track topology (shape). Topology would be enough for trains to navigate the network, but not enough to do so safely. To ensure safety, two systems collaborate:

  • Interlocking ensures trains are allowed to move forward
  • Signaling is the means by which interlocking communicates with the train

Detectors

These objects are used to create TVD sections (Track Vacancy Detection section): the track area in between detectors is a TVD section. When a train runs into a detector, the section it is entering becomes occupied. The only function of TVD sections is to locate trains.

In real life, detectors can be axle counters or track circuits for example.

For this means of location to be effective, detectors need to be placed regularly along the tracks: not too many, because of cost, but not too few either, otherwise TVD sections would be very large, trains would have to stay very far apart to be told apart, and capacity would drop.

There are often detectors close to all sides of a switch. This way, the interlocking is made aware almost immediately when the switch is cleared, and the switch is then free to be used again.

In OSRD, detectors are point objects, so the only attributes a detector needs are its id and its track location (track and offset).
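
A detector record is therefore as simple as the following sketch (values are made up):

detector = {
    "id": "DA0",
    "track": "TA0",      # track section the detector sits on
    "position": 1800.0,  # offset along the track section, in meters
}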

Infra diagram with all detectors

Some notes:

  • Between some points, we added only one detector (and not two), because they were really close together, and it would have made no sense to create a tiny TVD section between the two. This happens on track sections TA3, TA4, TA5, TF0 and TG3.
  • In our infrastructure, there are relatively few track sections which are long enough to require more detectors than just those related to switches: namely TA6, TA7, TD0, TD1, TF1, TG1 and TH1. For example TD0, which measures 25km, has in fact 17 detectors in total.

Buffer stops

BufferStops are obstacles designed to prevent trains from sliding off dead ends.

In our infrastructure, there is a buffer stop on each track section which has a loose end. There are therefore 8 buffer stops in total.

Together with detectors, they set the boundaries of TVD sections (see Detectors)

Routes

A Route is an itinerary in the infrastructure. A train path is a sequence of routes. Routes are used to reserve sections of path with the interlocking. See the dedicated documentation.

It is represented with the following attributes (a short sketch follows the list):

  • entry_point and exit_point: references detectors or buffer stops which mark the beginning and the end of the Route.
  • entry_point_direction: Direction on a track section to start the route from the entry_point.
  • switches_direction: A set of directions to follow when we encounter a switch on our Route, to build this Route from entry_point to exit_point.
  • release_detectors: When a train clears a release detector, resources reserved from the beginning of the route until this detector are released.
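
Here is a hedged sketch of what a route record could look like (identifiers and the direction value are made up for illustration; the exact railjson layout may differ):

route = {
    "entry_point": "DA0",                   # detector or buffer stop where the route starts
    "exit_point": "DA3",                    # detector or buffer stop where it ends
    "entry_point_direction": "START_TO_STOP",
    "switches_direction": {"PA0": "A_B1"},  # group to use for each switch encountered
    "release_detectors": ["DA1"],           # resources are released as these are cleared
}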

Signaling

Thanks to interlocking, trains are located and allowed to move. It’s a good start, but meaningless until trains are made aware of it. This is where Signals come into play: signals react to interlocking, and can be seen by trains.

How trains react to signals depends on the aspect, kind of signal, and signaling system.

Here are the most important attributes for signals:

  • linked_detector: The linked detector.
  • type_code: The type of signal.
  • direction: The direction it protects, which can simply be interpreted as the way in which it can be seen by an incoming train (since there are lights only on one side…). Direction is relative to track section orientation.
  • Cosmetic attributes like angle_geo or side which control the way in which the signals are displayed in the front-end.

Here is a visualization of how one can represent a signal, and which direction it protects.

Signal direction example


The way the signals are arranged is highly dependent on both signaling system and infrastructure manager.

Here are the basic rules used for this example infrastructure:

  • We add two spacing signals (one per direction) for each detector that is cutting a long TVD section into smaller ones.
  • Switch entries where a train might have to stop are protected by a signal (which is located outside of the switch TVD section). It must be visible from the direction used to approach the switch. When there are multiple switches in a row, only the first one usually needs protection, as interlocking is usually designed so as not to encourage trains to stop in the middle of intersections.

Note that detectors linked to at least one signal are not represented, as there are no signals without an associated detector in this example.

To get the id of a detector linked to a signal, take the signal’s id and replace S by D (e.g. SA0 -> DA0).

Infra diagram with all signals

Electrification

To allow electric trains to run on our infrastructure, we need to specify which parts of the infrastructure are electrified.

Catenaries

Catenaries are objects that represent the overhead wires that power electric trains. They are represented with the following attributes:

  • voltage: A string representing the type of power supply used for electrification
  • track_ranges: A list of ranges of track sections (TrackRanges) covered by this catenary. A TrackRange is composed of a track section id, a begin offset and an end offset.

In our example infrastructure, we have two Catenaries:

  • One with voltage set to "1500", which covers only TA0.
  • One with voltage set to "25000", which covers all others except TD1.

This means that only thermal trains can cross the TD1 track section.

Our example also outlines that, unlike its real life counterpart, a single Catenary may cover the whole infrastructure.

Neutral Sections

In some parts of an infrastructure, the train drivers may be instructed - mainly for safety reasons - to cut the power supply to the train.

To represent such parts, we use NeutralSections. They are represented mainly with the following attributes:

  • track_ranges: A list of DirectedTrackRanges (track ranges associated to a direction) which are covered by this neutral section.
  • lower_pantograph: A boolean indicating whether the train’s pantograph should be lowered while in this section.

In our example infrastructure, we have three NeutralSections: one at the junction of the "1500" and "25000" catenaries, one on TA6 and one on TG1 and TG4.

For more details about the model see the dedicated page.

Miscellaneous

Operational points

An operational point is also known in French as a “point remarquable” (PR). One OperationalPoint is a collection of points of interest (OperationalPointParts).

For example, it may be convenient (as a reference point for train operation) to store the location of platforms as parts and group them by station into operational points. In the same way, a bridge over tracks will be one OperationalPoint, but it will have several OperationalPointParts, one at the intersection with each track.

In the example infrastructure, we only used operational points to represent stations. Operational point parts are displayed as purple diamonds. Keep in mind a single operational point may contain multiple parts.

Operational points examples

Loading Gauge Limits

These objects are akin to Slopes and Curves: they cover a range of a track section, with a begin and an end offset, and represent a restriction on the trains that can travel on the given range, by weight or by train type (freight or passenger).

We did not put any in our examples.

Speed Sections

SpeedSections represent speed limits (in meters per second) that are applied to some parts of the tracks. One SpeedSection can span several track sections and does not necessarily cover whole track sections. Speed sections can overlap.

In our example infrastructure, we have a speed section covering the whole infrastructure, limiting the speed to 300 km/h. On a smaller part of the infrastructure, we applied more restrictive speed sections.

Speed section examples

2.2.2 - Neutral Sections

Documentation about what they are and how they are implemented

Physical object to model

Introduction

For a train to be able to run, it must either have an energy source on board (fuel, battery, hydrogen, …) or be supplied with energy throughout its journey.

To supply this energy, electrical cables are suspended above the tracks: the catenaries. The train then makes contact with these cables thanks to a conducting piece mounted on a mechanical arm: the pantograph.

Neutral sections

With this system it is difficult to ensure the electrical supply of a train continuously over the entire length of a line. On certain sections of track, it is necessary to cut the electrical supply of the train. These portions are called neutral sections.

Indeed, in order to avoid energy losses along the catenaries, the current is supplied by several substations distributed along the tracks. Two portions of catenaries supplied by different substations must be electrically isolated to avoid short circuits.

Moreover, the way the tracks are electrified (DC or not for example) can change according to the local uses and the time of installation. It is again necessary to electrically isolate the portions of tracks which are electrified differently. The train must also (except in particular cases) change its pantograph when the type of electrification changes.

In both cases, the driver is instructed to cut the train’s traction, and sometimes even to lower the pantograph.
In the French infrastructure, these zones are indicated by announcement, execution and end signs. They also carry the indication to lower the pantograph or not. The portions of track between the execution and end signs may not be electrified entirely, and may not even have a catenary (in this case the zone necessarily requires lowering the pantograph).
REV (for reversible) signs are sometimes placed downstream of the end of zone signs. They are intended for trains that run with a pantograph at the rear of the train. These signs indicate that the driver can resume traction safely.

Additionally, it may sometimes be impossible on a short section of track to place a catenary or to raise the train’s pantograph. In this case the line is still considered electrified, and the area without electrification (passage under a bridge for example) is considered as a neutral section.

Rolling stock

After passing through a neutral section, a train must resume traction. This is not immediate (a few seconds), and the necessary duration depends on the rolling stock.

In addition, the driver must, if necessary, lower his pantograph, which also takes time (a few tens of seconds) and also depends on the rolling stock.

Thus, the coasting imposed on the train extends outside the neutral section, since these system times are to be counted from the end of the neutral section.

Data model

We have chosen to model the neutral sections as the space between the signs linked to it (and not as the precise zone where there is no catenary or where the catenary is not electrified).

This zone is directional, i.e. associated with a direction of travel, in order to be able to take into account different placements of signs according to the direction. The execution sign of a given direction is not necessarily placed at the same position as the end of zone sign of the opposite direction.

For a two-way track, a neutral section is therefore represented by two objects.

The schema is the following

{
    "lower_pantograph": boolean,
    "track_ranges": [
        {
            "track": string,
            "start": number,
            "end": number,
            "direction": enum
        }
    ],
    "announcement_track_ranges": [
        {
            "track": string,
            "start": number,
            "end": number,
            "direction": enum
        }
    ]
}
  • lower_pantograph: indicates whether the pantograph should be lowered in this section
  • track_ranges: list of track section ranges where the train must not apply traction
  • announcement_track_ranges: list of track section ranges between the announcement sign and the execution sign

Display

Map

The zones displayed in the map correspond to the track_ranges of neutral sections, thus are between the execution and end signs of the zone. The color of the zone indicates whether the train must lower its pantograph in the zone or not.

The direction in which the zone applies is not represented.

Simulation results

In the linear display, it is always the area between the execution (EXE) and end (FIN) signs that is displayed.

Pathfinding

Neutral sections are therefore portions of “non-electrified” track where an electric train can still run (but where it cannot apply traction).

When searching for a path in the infrastructure, an electric train can travel through a track section that is not covered by the track_ranges of a catenary object (documentation to be written) only if it is covered by the track_ranges of a neutral section.

Simulation

In our simulation, we approximate the driver’s behavior as follows:

  • The coasting is started as soon as the train’s head passes the announcement sign
  • The system times (pantograph raising and traction resumption) start as soon as the train’s head passes the end sign.

In the current simulation, it is easier to use spatial integration bounds rather than temporal ones. We make the following approximation: when leaving the neutral section, we multiply the system times by the speed at the exit of the zone. The coasting is then extended over the obtained distance. This approximation is reasonable because the train’s inertia and the near absence of friction guarantee that the speed varies little over this time interval.
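
As a purely illustrative computation of this approximation (numbers made up):

exit_speed = 44.0    # m/s, speed of the train at the end sign (~160 km/h)
system_time = 15.0   # s, pantograph raising + traction resumption for this rolling stock
extra_coasting = exit_speed * system_time  # coasting distance added past the end of the zone
print(f"coasting extended by {extra_coasting:.0f} m beyond the neutral section")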

Improvements to be made

Several aspects could be improved:

  • We do not model the REV signs, all trains therefore only have one pantograph at the front in our simulations.
  • System times are approximated.
  • The driver’s behavior is rather restrictive (coasting could start after the announcement sign).
  • The display of the zones is limited: no representation of the direction or the announcement zones.
  • These zones are not editable.

2.3 - Running time calculation

OSRD can be used to perform two types of calculations:

  • Standalone train simulation: calculation of the travel time of a train on a given route without interaction between the train and the signalling system.
  • Simulation: “dynamic” calculation of several trains interacting with each other via the signalling system.

1 - The input data

A running time calculation is based on 5 inputs:

  • Infrastructure: Line and track topology, position of stations and passenger buildings, position and type of points, signals, maximum line speeds, corrected line profile (gradients, ramps and curves).

Infrastructure

The blue histogram is a representation of the gradients in [‰] per position in [m]. The gradients are positive for ramps and negative for slopes.

The orange line represents the cumulative profile, i.e. the relative altitude to the starting point.

The blue line is a representation of turns in terms of radii of curves in [m].

  • The rolling stock: the characteristics needed to perform the simulation are shown below.

Rolling Stock Material

The orange curve, called the effort-speed curve, represents the maximum motor effort as a function of the speed of travel.

The length, mass, and maximum speed of the train are shown at the bottom of the box.

  • The departure time is then used to calculate the times of passage at the various points of interest (including stations).

  • Allowances: Time added to the train’s journey to relax its running (see page on allowances).

Allowances

2 - The results

The results of a running time calculation can be represented in different forms:

  • The space/time graph (GET): represents the path of trains in space and time, in the form of generally diagonal lines whose slope is the speed. Stops are shown as horizontal segments.

Space/Time Graph

Example of a GET with several trains spaced about 30 minutes apart.

The x axis is the time of the train, the y axis is the position of the train in [m].

The blue line represents the most stretched running calculation for the train, the green line represents a relaxed, so-called “economic” running calculation.

The solid rectangles surrounding the paths represent the portions of the track successively reserved for the train to pass (called blocks).

  • The space/speed graph (SSG): represents the journey of a single train, this time in terms of speed. Stops are therefore shown as a drop in the curve to zero, followed by a re-acceleration.

Space/Speed Graph

The x axis is the train position in [m], the y axis is the train speed in [km/h].

The purple line represents the maximum permitted speed.

The blue line represents the speed in the case of the most stretched running calculation.

The green line represents the speed in the case of the “economic” travel calculation.

  • The timetable for the passage of the train at the various points of interest.

Departure timetables

2.3.1 - Physical modeling

Physical modelling plays an important role in the OSRD core calculation. It allows us to simulate train traffic, and it must be as realistic as possible.

Force review

To calculate the displacement of the train over time, we must first calculate its speed at each instant. A simple way to obtain this speed is to calculate the acceleration. Thanks to the fundamental principle of dynamics, the acceleration of the train at each instant is directly dependent on the different forces applied to it: $$ \sum \vec{F}=m\vec{a} $$

Running time

  • Traction: The value of the traction force \(F_{mot}\) depends on several factors:

    • the rolling stock
    • the speed of the train \(v_{x^{\prime}}\), according to the effort-speed curve below:

    $$ {\vec{F_{mot}}(v_{x^{\prime}}, x^{\prime})=F_{mot}(v_{x^{\prime}}, x^{\prime})\vec{e_x^{\prime}}} $$

    Running time

    The x axis represents the speed of the train in [km/h], the y axis the value of the traction force in [kN].

    • the action of the driver, who accelerates more or less strongly depending on where he is on his journey

  • Braking: The value of the braking force \(F_{brk}\) also depends on the rolling stock and the driver’s action. In the current state of modelling, braking is either zero or at its maximum value, which is constant for a given rolling stock.

$$ \vec{F_{brk}}(x^{\prime})=-F_{brk}(x^{\prime}){\vec{e_{x^{\prime}}}} $$

A second approach to modelling braking is the so-called hourly approach, as it is used for hourly production at SNCF. In this case, the deceleration is fixed and the braking no longer depends on the different forces applied to the train. Typical deceleration values range from 0.4 to 0.7m/s².


  • Forward resistance: To model the forward resistance of the train, the Davis formula is used, which takes into account all the friction and the aerodynamic resistance of the air. The value of the drag depends on the speed \(v_{x^{\prime}}\). The coefficients \(A\), \(B\), and \(C\) depend on the rolling stock.

$$ {\vec{R}(v_{x^{\prime}})}=-(A+Bv_{x^{\prime}}+{Cv_{x^{\prime}}}^2){\vec{e_{x^{\prime}}}} $$


  • Weight (slopes + turns): The weight force, given by the product of the mass \(m\) of the train and the gravitational constant \(g\), is projected on the axes \(\vec{e_{x^{\prime}}}\) and \(\vec{e_{y^{\prime}}}\). For the projection, we use the angle \(i(x^{\prime})\), which is calculated from the slope angle \(s(x^{\prime})\) corrected by a factor that takes into account the effect of the turning radius \(r(x^{\prime})\).

$$ \vec{P}(x^{\prime})=-mg\vec{e_y}= -mg\Big[\sin\big(i(x^{\prime})\big){\vec{e_{x^{\prime}}}(x^{\prime})}+\cos\big(i(x^{\prime})\big){\vec{e_{y^{\prime}}}(x^{\prime})}\Big] $$

$$ i(x^{\prime})= s(x^{\prime})+\frac{800m}{r(x^{\prime})} $$


  • Ground Reaction : The ground reaction force simply compensates for the vertical component of the weight, but has no impact on the dynamics of the train as it has no component along the axis \({\vec{e_x}^{\prime}}\).

$$ \vec{R_{gnd}}=R_{gnd}{\vec{e_{y^{\prime}}}} $$

Forces balance

The equation of the fundamental principle of dynamics projected onto the axis \({\vec{e_x}^{\prime}}\) (in the train frame of reference) gives the following scalar equation:

$$ a_{x^{\prime}}(t) = \frac{1}{m}\Big[F_{mot}(v_{x^{\prime}}, x^{\prime})-F_{brk}(x^{\prime})-(A+Bv_{x^{\prime}}+Cv_{x^{\prime}}^2)-mg\sin\big(i(x^{\prime})\big)\Big] $$

This is then simplified by considering that despite the gradient the train moves on a plane and by amalgamating \(\vec{e_x}\) and \(\vec{e_x}^{\prime}\). The gradient still has an impact on the force balance, but it is assumed that the train is only moving horizontally, which gives the following simplified equation:

$$ a_{x}(t) = \frac{1}{m}\Big[F_{mot}(v_{x}, x)-F_{brk}(x)-(A+Bv_{x}+Cv_{x}^2)-mg\sin\big(i(x)\big)\Big] $$
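
As a minimal sketch, the right-hand side of this equation can be evaluated numerically as follows. The traction and braking forces and the Davis coefficients are placeholders, not values from any real rolling stock:

import math

def acceleration(v, f_mot, f_brk, mass, A, B, C, i):
    """Acceleration from the simplified force balance above.

    v: speed [m/s]; f_mot, f_brk: traction and braking forces [N];
    A, B, C: Davis coefficients; i: corrected slope angle [rad].
    """
    davis = A + B * v + C * v * v        # forward resistance R(v)
    weight = mass * 9.81 * math.sin(i)   # weight component along the track
    return (f_mot - f_brk - davis - weight) / mass

# Full traction, no braking, flat track, made-up coefficients.
print(acceleration(v=20.0, f_mot=200e3, f_brk=0.0, mass=400e3,
                   A=5000.0, B=100.0, C=10.0, i=0.0))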

Resolution

The driving force and the braking force depend on the driver’s action (he decides to accelerate or brake more or less strongly depending on the situation). This dependence is reflected in the dependence of these two forces on the position of the train. The weight component is also dependent on the position of the train, as it comes directly from the slopes and bends below the train.

In addition, the driving force depends on the speed of the train (according to the effort-speed curve), as does the forward resistance.

These different dependencies make it impossible to solve this equation analytically, and the acceleration of the train at each moment must be calculated by numerical integration.

2.3.2 - Numerical integration

Introduction

Since physical modelling has shown that the acceleration of the train is influenced by various factors that vary along the route (gradient, curvature, engine traction force, etc.), the calculation must be carried out using a numerical integration method. The path is then separated into steps short enough to consider all these factors as constant, which then allows using the equations of motion to calculate the displacement and speed of the train.

Euler’s method of numerical integration is the simplest way of doing this, but it has a number of drawbacks. This article explains the Euler method, why it is not suitable for OSRD purposes and which integration method should be used instead.

Euler’s method

The Euler method applied to the integration of the equation of motion of a train is:

$$v(t+dt) = a(v(t), x(t))dt + v(t)$$

$$x(t+dt) = \frac{1}{2}a(v(t), x(t))dt^2 + v(t)dt + x(t)$$

Euler’s method
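
A minimal sketch of one Euler step as written above, with the acceleration function a(v, x) left as a parameter:

def euler_step(v, x, dt, a):
    # One integration step of the two equations above.
    acc = a(v, x)
    v_next = v + acc * dt
    x_next = x + v * dt + 0.5 * acc * dt * dt
    return v_next, x_next

# Toy usage with a constant acceleration standing in for the real force balance.
v, x = 0.0, 0.0
for _ in range(10):
    v, x = euler_step(v, x, dt=1.0, a=lambda v, x: 0.5)
print(v, x)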

 

Advantages of Euler’s method

The advantages of the Euler method are that it is very simple to implement and that its calculation is rather fast for a given time step, compared to other numerical integration methods (see appendix).

Disadvantages of the Euler’s method

The Euler integration method presents a number of problems for OSRD:

  • It is relatively imprecise, and therefore requires a small time step, which generates a lot of data.
  • With time integration, only the conditions at the starting point of the integration step (gradient, infrastructure parameters, etc.) are known, as one cannot predict precisely where it will end.
  • We cannot anticipate future changes in the instructions: the train only reacts by comparing its current state with its setpoint at the same instant. To illustrate, it is as if the driver were unable to see ahead, whereas in reality he anticipates according to the signals, slopes and bends he sees ahead.

The Runge-Kutta 4 method

The Runge-Kutta 4 method applied to the integration of the equation of motion of a train is:

$$v(t+dt) = v(t) + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)dt$$

With:

$$k_1 = a(v(t), x(t))$$

$$k_2 = a\Big(v(t) + k_1\frac{dt}{2}, x(t) + v(t)\frac{dt}{2} + k_1\frac{dt^2}{8}\Big)$$

$$k_3 = a\Big(v(t) + k_2\frac{dt}{2}, x(t) + v(t)\frac{dt}{2} + k_2\frac{dt^2}{8}\Big)$$

$$k_4 = a\Big(v(t) + k_3dt, x(t) + v(t)dt + k_3\frac{dt^2}{2}\Big)$$

Runge-Kutta 4’s method
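
A minimal sketch of one Runge-Kutta 4 step, following the formulas above. The document only gives the speed update, so the position update is deliberately left out of this sketch:

def rk4_speed_step(v, x, dt, a):
    # k1..k4 follow the four expressions above; a(v, x) is the acceleration function.
    k1 = a(v, x)
    k2 = a(v + k1 * dt / 2, x + v * dt / 2 + k1 * dt**2 / 8)
    k3 = a(v + k2 * dt / 2, x + v * dt / 2 + k2 * dt**2 / 8)
    k4 = a(v + k3 * dt, x + v * dt + k3 * dt**2 / 2)
    return v + (k1 + 2 * k2 + 2 * k3 + k4) * dt / 6

print(rk4_speed_step(0.0, 0.0, 1.0, lambda v, x: 0.5))  # constant acceleration example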

 

Advantages of the Runge-Kutta 4 method

The Runge-Kutta 4 integration method addresses the various problems raised by Euler’s method:

  • It allows the anticipation of instruction changes within a calculation step, thus representing more accurately the reality of driving a train.
  • It is more accurate for the same calculation time (see appendix), allowing for larger integration steps and therefore fewer data points.

Disadvantages of the Runge-Kutta 4 method

The only notable drawback of the Runge Kutta 4 method encountered so far is its difficulty of implementation.

The choice of integration method for OSRD

Study of accuracy and speed of calculation

Different integration methods could have replaced the basic Euler integration in the OSRD algorithm. In order to decide which method would be most suitable, a study of the accuracy and computational speed of different methods was carried out. This study compared the following methods:

  • Euler
  • Euler-Cauchy
  • Runge-Kutta 4
  • Adams 2
  • Adams 3

All explanations of these methods can be found (in French) in this document, and the Python code used for the simulation is here.

The simulation calculates the position and speed of a high-speed train accelerating on a flat straight line.

Equivalent time step simulations

A reference curve was simulated using the Euler method with a time step of 0.1s, then the same path was simulated using the other methods with a time step of 1s. It is then possible to simply compare each curve to the reference curve, by calculating the absolute value of the difference at each calculated point. The resulting absolute error of the train’s position over its distance travelled is as follows:

precisions_h_equivalent

It is immediately apparent that the Euler method is less accurate than the other four by about an order of magnitude. Each curve has a peak where the accuracy is extremely high (extremely low error), which is explained by the fact that all curves start slightly above the reference curve, cross it at one point and end slightly below it, or vice versa.

As accuracy is not the only important indicator, the calculation time of each method was measured. This is what we get for the same input parameters:

Integration method    Calculation time (s)
Euler                 1.86
Euler-Cauchy          3.80
Runge-Kutta 4         7.01
Adams 2               3.43
Adams 3               5.27

Thus, Euler-Cauchy and Adams 2 are about twice as slow as Euler, Adams 3 is about three times as slow, and RK4 is about four times as slow. These results have been verified on much longer simulations, and the different ratios are maintained.

Simulation with equivalent calculation time

As the computation times of all methods depend linearly on the time step, it is relatively simple to compare the accuracy for approximately the same computation time. Multiplying the time step of Euler-Cauchy and Adams 2 by 2, the time step of Adams 3 by 3, and the time step of RK4 by 4, here are the resulting absolute error curves:

precisions_time_equivalent

And here are the calculation times:

Integration method    Calculation time (s)
Euler                 1.75
Euler-Cauchy          2.10
Runge-Kutta 4         1.95
Adams 2               1.91
Adams 3               1.99

After some time, RK4 tends to be the most accurate method, slightly more accurate than Euler-Cauchy, and still much more accurate than the Euler method.

Conclusions of the study

The study of accuracy and computational speed presented above shows that RK4 and Euler-Cauchy would be good candidates to replace the Euler algorithm in OSRD: both are fast, accurate, and could replace the Euler method without requiring large implementation changes because they only compute within the current time step. It was decided that OSRD would use the Runge-Kutta 4 method because it is slightly more accurate than Euler-Cauchy and it is a well-known method for this type of calculation, so it is very suitable for an open-source simulator.

2.3.3 - Envelopes system

The envelope system is an interface created specifically for the OSRD running time calculation. It allows manipulating different space/speed curves: slicing them, ending them, interpolating specific points, and addressing many other needs of the running time calculation.

A specific interface in the OSRD Core service

The envelope system is part of the core service of OSRD (see software architecture).

Its main components are:

1 - EnvelopePart: space/speed curve, defined as a sequence of points and having metadata indicating for example if it is an acceleration curve, a braking curve, a speed hold curve, etc.

2 - Envelope: a list of end-to-end EnvelopeParts on which it is possible to perform certain operations:

  • check for continuity in space (mandatory) and speed (optional)
  • look for the minimum and/or maximum speed of the envelope
  • cut a part of the envelope between two points in space
  • perform a velocity interpolation at a certain position
  • calculate the elapsed time between two positions in the envelope

envelope_scheme
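
As a hedged sketch of two of these operations on an envelope part given as (position, speed) points, assuming the speed varies linearly between points (the real Envelope code may integrate differently):

def interpolate_speed(points, position):
    # Linear speed interpolation at a given position.
    for (x0, v0), (x1, v1) in zip(points, points[1:]):
        if x0 <= position <= x1:
            return v0 + (v1 - v0) * (position - x0) / (x1 - x0)
    raise ValueError("position outside the envelope part")

def elapsed_time(points):
    # Elapsed time over the part: dt = dx / mean speed on each segment
    # (assumes strictly positive speeds).
    return sum((x1 - x0) / ((v0 + v1) / 2)
               for (x0, v0), (x1, v1) in zip(points, points[1:]))

part = [(0.0, 10.0), (500.0, 20.0), (1000.0, 20.0)]
print(interpolate_speed(part, 250.0))  # 15.0 m/s
print(elapsed_time(part))              # about 58.3 s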

3 - Overlays: a system for adding more constrained (i.e. lower speed) EnvelopeParts to an existing envelope.

Given envelopes vs. calculated envelopes

During the simulation, the train is supposed to follow certain speed instructions. These are modelled in OSRD by envelopes in the form of space/speed curves. Two types can be distinguished:

  • Envelopes from infrastructure and rolling stock data, such as maximum line speed and maximum train speed. Being input data for our calculation, they do not correspond to curves with a physical meaning, as they are not derived from the results of a real integration of the physical equations of motion.
  • Envelopes resulting from a real integration of the physical equations of motion. They correspond to a curve that is physically achievable by the train and also contain time information.

A simple example to illustrate this difference: if we simulate a TER journey on a mountain line, one of the input data will be a maximum speed envelope of 160 km/h, corresponding to the maximum speed of our TER. However, this envelope does not correspond to a physical reality, as it is possible that on certain sections the gradient is too steep for the train to be able to maintain this maximum speed of 160 km/h. The calculated envelope will therefore show, in this example, a speed drop in the steepest areas, whereas the given envelope was perfectly flat.

Simulation of several trains

In the case of the simulation of many trains, the signalling system must ensure safety. The effect of signalling on the running calculation of a train is reproduced by superimposing dynamic envelopes on the static envelope. A new dynamic envelope is introduced for example when a signal closes. The train follows the static economic envelope superimposed on the dynamic envelopes, if any. In this simulation mode, a time check is performed against a theoretical time from the time information of the static economic envelope. If the train is late with respect to the scheduled time, it stops following the economic envelope and tries to go faster. Its space/speed curve will therefore be limited by the maximum effort envelope.

2.3.4 - Pipeline

The running time calculation in OSRD is a 4-step process, each step using the envelope system:

  1. Construction of the most restrictive speed profile
  2. Addition of the different braking curves
  3. Adding the different acceleration curves and checking the constant speed curves
  4. Application of allowance(s)

 

Calculation of the Most Restricted Speed Profile (MRSP)

A first envelope is calculated at the beginning of the simulation by grouping all static speed limits:

  • maximum line speed
  • maximum speed of rolling stock
  • temporary speed limits (e.g. in case of works on a line)
  • speed limits by train category
  • speed limits according to train load
  • speed limits corresponding to signposts

The length of the train is also taken into account to ensure that the train does not accelerate until its tail leaves the slowest speed zone. An offset is then applied to the red dashed curve. The resulting envelope (black curve) is called the Most Restricted Speed Profile (MRSP). It is on this envelope that the following steps will be calculated.

Most Restricted Speed Profile

The red dotted line represents the maximum permitted speed depending on the position. The black line represents the MRSP where the train length has been taken into account.

It should be noted that the different envelopeParts composing the MRSP are input data, so they do not correspond to curves with a physical reality.
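
As an illustrative sketch of this construction (not the OSRD implementation), the MRSP can be seen as the pointwise minimum of all static limits, each limit being prolonged by the train length so that the head only speeds up once the tail has left the zone:

def mrsp(limits, path_length, train_length, step=10.0):
    """limits: list of (begin, end, speed) in meters and m/s.

    Returns a sampled (position, allowed speed) profile.
    """
    def allowed(pos):
        speeds = [s for (b, e, s) in limits if b <= pos < e + train_length]
        return min(speeds) if speeds else float("inf")

    n = int(path_length / step)
    return [(i * step, allowed(i * step)) for i in range(n + 1)]

# 300 km/h everywhere, 80 km/h between km 2 and km 3, 200 m long train.
profile = mrsp([(0, 5000, 300 / 3.6), (2000, 3000, 80 / 3.6)], 5000, 200)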

Calculation of the Max Speed Profile

Starting from the MRSP, all braking curves are calculated using the overlay system (see here for more details on overlays), i.e. by creating envelopeParts which will be more restrictive than the MRSP. The resulting curve is called Max Speed Profile. This is the maximum speed envelope of the train, taking into account its braking capabilities.

Since braking curves have an imposed end point and the braking equation has no analytical solution, it is impossible to predict their starting point. The braking curves are therefore calculated backwards from their target point, i.e. the point in space where a certain speed limit is imposed (finite target speed) or the stopping point (zero target speed).

Max Speed Profile

For historical reasons in hourly production, braking curves are calculated at SNCF with a fixed deceleration, the so-called hourly deceleration (typically ~0.5m/s²) without taking into account the other forces. This method has therefore also been implemented in OSRD, allowing the calculation of braking in two different ways: with this hourly rate or with a braking force that is simply added to the other forces.
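
A hedged sketch of the backward construction, using the fixed-deceleration (“hourly”) braking mode described above, where v² = v_target² + 2·a·d gives the speed at a distance d upstream of the target:

def braking_curve(target_position, target_speed, deceleration, v_ceiling, step=10.0):
    # Build (position, speed) points backwards from the target point until the
    # curve reaches the speed ceiling (e.g. the Max Speed Profile) or the origin.
    points = [(target_position, target_speed)]
    pos, v = target_position, target_speed
    while v < v_ceiling and pos > 0:
        pos -= step
        v = (v * v + 2 * deceleration * step) ** 0.5
        points.append((pos, v))
    return sorted(points)  # ordered by increasing position

# Stop (0 m/s) at 3000 m with the typical 0.5 m/s^2 deceleration, 80 m/s ceiling.
curve = braking_curve(3000.0, 0.0, 0.5, v_ceiling=80.0)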

Calculation of the Max Effort Profile

For each point corresponding to an increase in speed in the MRSP or at the end of a stop braking curve, an acceleration curve is calculated. The acceleration curves are calculated taking into account all active forces (traction force, driving resistance, weight) and therefore have a physical meaning.

For envelopeParts whose physical meaning has not yet been verified (which at this stage are the constant speed running phases, always coming from the MRSP), a new integration of the equations of motion is performed. This last calculation is necessary to take into account possible speed stalls in case the train is physically unable to hold its speed, typically in the presence of steep ramps (see this example).

The envelope that results from the addition of the acceleration curves and the verification of the constant speed sections is called the Max Effort Profile.

Max Effort Profile

At this stage, the resulting envelope is continuous and has a physical meaning from start to finish. The train accelerates to the maximum, runs as fast as possible according to the different speed limits and driving capabilities, and brakes to the maximum. The resulting travel calculation is called the basic running time. It corresponds to the fastest possible route for the given rolling stock on the given route.

Application of allowance(s)

After the calculation of the basic run (corresponding to the Max Effort Profile in OSRD), it is possible to apply allowances. Allowances are additions of extra time to the train’s journey. They are used to allow the train to catch up if necessary or for other operational purposes (more details on allowances here).

A new Allowances envelope is therefore calculated using overlays to distribute the allowance requested by the user over the maximum effort envelope calculated previously.

Allowances

In the OSRD running calculation it is possible to distribute the allowances in a linear way, by lowering all speeds by a certain factor, or in an economic way, i.e. by minimising the energy consumption during the train run.

2.3.5 - Allowances

The purpose of allowances

As explained in the calculation of the Max Effort Profile, the basic running time represents the most stretched run normally achievable, i.e. the fastest possible run of the given equipment on the given route. The train accelerates to the maximum, travels as fast as possible according to the different speed limits and driving capabilities, and brakes to the maximum.

This basic run has a major disadvantage: if a train leaves 10 minutes late, it will arrive at best 10 minutes late, because by definition it is impossible for it to run faster than the basic run. Therefore, trains are scheduled with one or more allowances added. The allowances are a relaxation of the train’s route, an addition of time to the scheduled timetable, which inevitably results in a lowering of running speeds.

A train running on its basic running time is unable to catch up!

Allowances types

There are two types of allowances:

  • The regularity allowance: this is the additional time added to the basic running time to take account of the inaccuracy of speed measurement, to compensate for the consequences of external incidents that disrupt the theoretical run of trains, and to maintain the regularity of the traffic. The regularity allowance applies to the whole route, although its value may change at certain intervals.
  • The construction allowance: this is the time added/removed on a specific interval, in addition to the regularity allowance, but this time for operational reasons (dodging another train, clearing a track more quickly, etc.)

A basic running time with an added regularity allowance gives what is known as a standard running time.

Allowance distribution

Since the addition of allowance results in lower speeds along the route, there are many possible ways to distribute it. Indeed, there are an infinite number of solutions that result in the same journey time.

As a simple example, in order to reduce the running time of a train by 10% of its journey time, it is possible to extend any stop by the time equivalent to this 10%, just as it is possible to run at 1/1.1 = 90.9% of the train’s capacity over the entire route, or to run slower, but only at high speeds…

There are currently two algorithms for allowance distribution in OSRD: linear and economic.

Linear distribution

Linear allowance distribution is simply lowering the speeds by the same factor over the area where the user applies the allowance. Here is an example of its application:

Python plot linear

The advantage of this distribution is that the allowance is spread evenly over the entire journey. A train that is late on 30% of its journey will have 70% of its allowance for the remaining 70% of its journey.

Economic distribution

The economic distribution of the allowance, presented in detail in this document (MARECO is an algorithm designed by the SNCF research department), consists of distributing the allowance in the most energy-efficient way possible. It is based on two principles:

  1. a maximum speed, avoiding the most energy-intensive speeds
  2. coasting zones, located before braking phases and steep gradients, where the train runs with traction cut off thanks to its inertia, allowing it to consume no energy during this period

Python plot eco with slopes

An example of an economic run. Above, the gradients/ramps encountered by the train. The coasting zones are shown in blue.

3 - How-to Guides

Recipes for addressing key problems and use-cases

How-to guides are recipes. They guide you through the steps involved in addressing key problems and use-cases. They are more advanced than tutorials and assume some knowledge of how OSRD works.

3.1 - Contribute to OSRD

Learn about how we work, and how you can work with us

3.1.1 - Preamble

An introduction to contributing to OSRD

First off, thanks for taking the time to contribute!

The following chapters are a set of guidelines for contributing to OSRD. These guidelines are mostly not strict rules; it’s probably fine to do things slightly differently. If you have already contributed to open source projects before, you probably won’t be surprised. If you have not, it will probably help a lot!

Communicate

Chatting with other contributors is a great way to speed things up.

Inquire

Just like with any project, changes rely on past work. Before making changes, it is best to learn about what’s already there:

  • read technical documentation
  • read the existing source code related to your project
  • chat with developers who last worked on areas you are interested in

Continue towards initial set-up ‣

3.1.2 - License and set-up

How to set up your development environment? What does our license involve?

License of code contributions

The source code of OSRD is available under the LGPLv3 license. By contributing to the codebase, you consent to the distribution of your changes under the project’s license.

LGPLv3 forbids modifying source code without sharing the changes under the same license: use other people’s work, and share yours!

This constraint does not propagate through APIs: You can use OSRD as a library, framework or API server to interface with proprietary software. Please suggest changes if you need new interfaces.

Set things up

Get the source code

  • Install git.1
  • Open a terminal2 in the folder where the source code of OSRD will be located
  • Run git clone git@github.com:osrd-project/osrd

Launch the application

Thanks to docker, one can easily compile, configure, and run all services after making a change. One can also start only a subset of the services.

Continue towards code contribution ‣


  1. Under Linux, follow the installation steps for your distribution on Docker’s documentation ↩︎ ↩︎

  2. Under Windows, open Git Bash ↩︎

  3. Under Windows/WSL, Docker Desktop is recommended ↩︎

3.1.3 - Contribute code

Integrate changes into OSRD

This chapter is about the process of integrating changes into the common code base. If you need help at any stage, open an issue or message us.

OSRD application is split in multiple services written in several languages. We try to follow general code best practices and follow each language specificities when required.

3.1.3.1 - General principles

Please read this first!
  • Explain what you’re doing and why.
  • Document new code with doc comments.
  • Include clear, simple tests.
  • Break work into digestible chunks.
  • Take the time to pick good names.
  • Avoid abbreviations that are not well-known.
  • Control and consistency over 3rd party code reuse: Only add a dependency if it is absolutely necessary.
    • Every dependency we add decreases our autonomy and consistency.
    • We try to keep PRs bumping dependencies to a low number each week in each component, so grouping dependency bumps in a batch PR is a valid option (see component’s README.md).
  • Don’t reinvent every wheel: as a counter to the previous point, don’t reinvent everything at all costs.
    • If there is a dependency in the ecosystem that is the “de facto” standard, we should heavily consider using it.
  • More code general recommendations in main repository CONTRIBUTING.md.
  • Ask for any help that you need!

Consult back-end conventions ‣

Consult front-end conventions ‣

Continue towards write code ‣

Continue towards tests ‣

3.1.3.2 - Back-end conventions

Coding style guide and best practices for back-end

Python

Python code is used for some packages and integration testing.

Rust

  • As a reference for our API development we are using the Rust API guidelines. Generally, these should be followed.
  • Prefer granular imports over glob imports like diesel::*.
  • Tests are written with the built-in testing framework.
  • Use the documentation example to know how to phrase and format your documentation.
  • Use consistent comment style:
    • /// doc comments belong above #[derive(Trait)] invocations.
    • // comments should generally go above the line in question, rather than in-line.
    • Start comments with capital letters. End them with a period if they are sentence-like.
  • Use comments to organize long and complex stretches of code that can’t sensibly be refactored into separate functions.
  • Code is linted with clippy.
  • Code is formatted with fmt.

Java

3.1.3.3 - Front-end conventions

Coding style guide and best practices for front-end

We use ReactJS and all files must be written in Typescript.

The code is linted with eslint, and formatted with prettier.

Nomenclature

Infrastructure diagram

The applications (osrd eex, osrd stdcm, infra editor, rolling-stock editor) offer views (project management, study management, etc.) linked to modules (project, study, etc.) which contain the components.

These views are made up of components and sub-components all derived from the modules. In addition to containing the views files for the applications, they may also contain a scripts directory which offers scripts related to these views. The views determine the logic and access to the store.

Modules are collections of components attached to an object (a scenario, a rolling stock, a TrainSchedule). They contain :

  • a components directory hosting all components
  • an optional styles directory per module for styling components in scss
  • an optional assets directory per module (which contains assets, e.g. default datasets, specific to the module)
  • an optional reducers file per module
  • an optional types file per module
  • an optional consts file per module

An assets directory (containing images and other files).

Last but not least, a common directory offering :

  • a utils directory for utility functions common to the entire project
  • a types file for types common to the entire project
  • a consts file for constants common to the entire project

Implementation principles

Routing & SLUG

In progress

projects/{project name}/studies/{study name}/scenarios/{scenario name}

Styles & SCSS

WARNING: in CSS/React, the scope of a class does not depend on where the file is imported, but is valid for the entire application. If you import an scss file in the depths of a component (which we strongly advise against), its classes will be available to the whole application and may therefore cause side effects.

It is therefore highly recommended to make the tree structure of applications, views, modules and components easy to follow within the SCSS code too, and in particular to nest class names to avoid side effects, as the compiler will take care of making the necessary hierarchy.

If, for example, we have a rollingStockSelector component which proposes a list of rolling stock rollingStockList, represented by rollingStockCard components containing an image representing the rolling stock rollingStockImg, we should have the following SCSS structure:

.rollingStockSelector {
  .rollingStockList {
    .rollingStockCard {
      .rollingStockImg {
        width: 5rem;
        height: auto;
      }
    }
  }
}

This ensures that the image contained in the rolling stock card inherits the correct CSS properties .rollingStockSelector .rollingStockList .rollingStockCard .rollingStockImg.

CSS Modules

CSS modules allow scoping CSS styles to a specific component, thereby avoiding conflicts with global class names.

Vite natively supports CSS modules. Ensure that your CSS file has the .module.css extension, for example, styles.module.css.

Using CSS Modules in Components
  1. Create an SCSS file with the .module.scss extension:
/* MyComponent.module.scss */
.container {
  background-color: white;
}

.title {
  font-size: 24px;
  color: #333;
}
  2. Use the classes in your React component:

Vite transforms classes into objects that contain hashed classes (e.g., _container_h3d8bg) and uses them during bundle generation, making the classes unique.

import React from "react";
import styles from "./MyComponent.module.scss";

export function MyComponent() {
  return (
    <div className={styles.container}>
      <h1 className={styles["title"]}>My Title</h1>
    </div>
  );
}

For more information, you can refer to the Vite.js documentation.

Class names, using cx().

Classes are normally added one after the other, in the className="" property.

However, when necessary - class usage tests, concatenation, etc. - we use the classnames library, which recommends the following usage:

<div className="rollingStockSelector">
  <div className="rollingStockList">
    <div className="rollingStockCard w-100 my-2">
      <img
        className={cx("rollingStockImg", "m-2", "p-1", "bg-white", {
          valid: isValid(),
          selected: rollingStockID === selectedRollingStockID,
        })}
      />
    </div>
  </div>
</div>

Classes are separated each in a string and Boolean or other operations are performed in an object that will return - or not - the property name as the class name to be used in CSS.

Store/Redux

Everything that is a selector is managed by the view and passed as props to components and sub-components.

Consequently, read and write calls to the store must be made at view level, feeding the components provided by the view with props and state.
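
As a purely illustrative sketch of this principle (the component, selector and paths are hypothetical, not actual project files), a view could look like this:

import { useSelector } from "react-redux";

import ScenarioTimetable from "modules/scenario/components/ScenarioTimetable";
import { getSelectedScenarioID } from "reducers/scenario/selectors";

export default function ScenarioView() {
  // the view reads from the store...
  const scenarioID = useSelector(getSelectedScenarioID);

  // ...and feeds the module's components through props
  return <ScenarioTimetable scenarioID={scenarioID} />;
}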

RTK

Use generated endpoints from openapi.yaml files to consume the backend.

Operation of RTK Query cache

When data is retrieved from the backend, RTK caches it in the store. If the same endpoint is called again with the same parameters, RTK will use the cached data instead of making a new call to the backend.

In the store, you will see the editoastApi key containing the cached data of all editoast endpoints:

store Redux

Here for example, the getProjects endpoint is called.

RTK stores the endpoint’s name, as well as the call’s parameters, to form a unique key endpointName({ parameters }) (here getProjects({"ordering":"LastModifiedDesc","pageSize":1000})).

{
  'getProjectsByProjectIdStudiesAndStudyId({"projectId":13,"studyId":16})': {
    status :"fulfilled",
    etc
  },
  'getProjectsByProjectIdStudiesAndStudyId({"projectId":13,"studyId":14})': {
    
  }
}

In this second example, the same endpoint has been called with the same projectId parameter, but a different studyId parameter.

Serialization of keys in the cache

The strings used as keys in the cache are essentially the parameter object passed through the JSON.stringify function, which converts a JS object into a string (thus serialized).

By default, serialization does not normalize the order of object keys. For example, JSON.stringify will not produce the same string for these two objects: { a: 1, b: 2 } and { b: 2, a: 1 }.

RTK will optimize caching by ensuring that the result of a call with {"projectId":13,"studyId":16} or {"studyId":16, "projectId":13} is stored under the same key in the cache.

To see the detailed operation, here is the code for this serialization function:

RTK Serialization Function
const defaultSerializeQueryArgs: SerializeQueryArgs<any> = ({
    endpointName,
    queryArgs,
  }) => {
    let serialized = ''

    const cached = cache?.get(queryArgs)

    if (typeof cached === 'string') {
      serialized = cached
    } else {
      const stringified = JSON.stringify(queryArgs, (key, value) =>
        isPlainObject(value)
          ? Object.keys(value)
              .sort() // keys are reordered here
              .reduce<any>((acc, key) => {
                acc[key] = (value as any)[key]
                return acc
              }, {})
          : value
      )
      if (isPlainObject(queryArgs)) {
        cache?.set(queryArgs, stringified)
      }
      serialized = stringified
    }
    // Sort the object keys before stringifying, to prevent useQuery({ a: 1, b: 2 }) having a different cache key than useQuery({ b: 2, a: 1 })
    return `${endpointName}(${serialized})`
  }
Data subscription

In RTK Query terminology, when a React component calls an endpoint defined in RTK Query, it subscribes to the data.

RTK counts the number of references to the same pair (endpoint, {parameters}). When two components subscribe to the same data, they share the same key in the cache.

import { osrdEditoastApi } from "./api.ts";

function Component1() {
  // component subscribes to the data
  const { data } = osrdEditoastApi.useGetXQuery(1);

  return <div>...</div>;
}

function Component2() {
  // component subscribes to the data
  const { data } = osrdEditoastApi.useGetXQuery(2);

  return <div>...</div>;
}

function Component3() {
  // component subscribes to the data
  const { data } = osrdEditoastApi.useGetXQuery(3);

  return <div>...</div>;
}

function Component4() {
  // component subscribes to the *same* data as ComponentThree,
  // as it has the same query parameters
  const { data } = osrdEditoastApi.useGetXQuery(3);

  return <div>...</div>;
}

Here, Component3 and Component4 will generate only one call to the backend. They subscribe to the same data (same endpoint and same parameter 3). They will share the same key in the cache.

In total, there will be three calls to the backend here, with parameters 1, 2, and 3.

As long as at least one mounted React component calls the osrdEditoastApi.endpoints.getProjectsByProjectId.useQuery hook, for example, the data will be retained in the cache.

Once the last component is unmounted, the data is removed from the cache after 60 seconds (default value).

Translation

Application translation is performed on Transifex. The default language is French. If you add a new translation key, it can be added directly to the code, in all available languages. Please note that if you need to correct a translation, we recommend that you use Transifex, to avoid any conflict.

Rules and important elements

No component should be responsible for updating the data it uses

Only views contain the store selectors, which are then given as props to the components of the module linked to the view.

SCSS is not scoped

A .scss file buried in the tree structure doesn’t guarantee that the classes it contains can only be accessed there. Even if the file is imported deep inside a component (which is formally forbidden anyway: you must use SCSS imports), all the classes it declares are accessible everywhere.

Prefer a judicious choice of root class name for a given module, and use the tree structure available in the SCSS file.

Imports must follow a specific order

ESLint is set up to automatically sort imports into four groups, each of them sorted in alphabetical order:

  • React
  • External libraries
  • Internal absolute path files
  • Internal relative path files

Each of these groups will be separated by an empty line.

ESLint will trigger a warning if you don’t follow these guidelines.

You must use the full path for all your imports.

Import links can be relative only if the file to be imported is in the same directory.
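
For illustration, a file following these rules might start as below (the comments only mark the groups, and the module paths are hypothetical):

// React
import { useMemo, useState } from "react";

// External libraries
import cx from "classnames";
import { useTranslation } from "react-i18next";

// Internal absolute path files
import { osrdEditoastApi } from "common/api/osrdEditoastApi";
import ScenarioCard from "modules/scenario/components/ScenarioCard";

// Internal relative path files
import ScenarioDescription from "./ScenarioDescription";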

TypeScript

import & export

ESLint and TypeScript are set up to enforce typed imports for exported types.

This setup allows:

  • Auto-typing the import when using a type in a file with autocompletion.
  • Getting two errors (one from each package) asking you to use a type import if you didn’t.

When an import or export contains only types, indicate it with the type keyword.

export type { Direction, DirectionalTrackRange as TrackRange };
import type { typedEntries, ValueOf } from "utils/types";

When an import contains values as well as types, it is structured as below, in alphabetical order.

import {
  osrdEditoastApi,
  type ScenarioCreateForm,
} from "common/api/osrdEditoastApi";

This allows us to:

  • Improve the performance and analysis process of the compiler and the linter.
  • Make these declarations more readable; we can clearly see what we are importing.
  • Avoid dependency cycles:

dependency cycle

The error disappears with the type keyword

dependency cycle

  • Make the final bundle lighter (all types disappear at compilation)

3.1.3.4 - Write code

Integrate changes into OSRD
  1. If you are not used to Git, follow this tutorial

  2. Create a branch
    If you intend to contribute regularly, you can request access to the main repository. Otherwise, create a fork.

  3. Add changes to your branch
    Before you start working, try to split your work into macroscopic steps. At the end of each step, save your changes into a commit. Try to make commits of logical and atomic units. Try to follow style conventions.

  4. Keep your branch up-to-date

    git switch <your_branch>
    git fetch
    git rebase origin/dev
    

Continue towards commit style ‣

3.1.3.5 - Commit style

Some advice and rules about commit messages

The overall format for git commits is as follows:

component1, component2: imperative description of the change

Detailed or technical description of the change and what motivates it,
if it is not entirely obvious from the title.
  • the commit message, just like the code, must be in English (only ASCII characters for the title)
  • there can be multiple components separated by : in case of hierarchical relationships, with , otherwise
  • components are lower-case, using -, _ or . if necessary
  • the imperative description of the change begins with a lower-case verb
  • the title must not contain any link (# is forbidden)

Ideally:

  • the title should be self-explanatory: no need to read anything else to understand it
  • the commit title is all lower-case
  • the title is clear to a reader not familiar with the code
  • the body of the commit contains a detailed description of the change
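
For example, a commit following these rules could look like this (the change described is purely illustrative):

editoast, front: add a kind field to operational points

The front-end needs to display a dedicated icon per kind of operational
point, which requires editoast to expose this information.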

Counter-examples of commit titles

To be avoided entirely:

  • component: update ./some/file.ext: specify the update itself rather than the file, the files are technical elements welcome in the body of the commit
  • component: fix #42: specify the problem fixed in the title, links (to issue, etc.) are very welcome in commit’s body
  • wip: describe the work (and finish it)

Welcome to ease review, but do not merge:

  • fixup! previous commit: an autosquash must be run before the merge
  • Revert "previous commit of the same PR": both commits must be dropped before merging

Continue towards sharing your changes ‣

3.1.3.6 - Share your changes

How to submit your code modifications for review?

The author of a pull request (PR) is responsible for its “life cycle”. They are responsible for contacting the various parties involved, following the review, responding to comments and correcting the code following the review (you can also check the dedicated page about code review).

In the case of a large PR, don’t hesitate to ask several reviewers to organize themselves, or even to carry out the review together, reviewers and author.

  1. Open a pull request
    Once your changes are ready, you have to request integration with the dev branch.

    If possible:

    • Make PR of logical and atomic units too (avoid mixing refactoring, new features and bug fix at the same time).
    • Add a description to PRs to explain what they do and why.
    • Help the reviewer by following advice given in mtlynch article.
    • Add area:<affected_area> tags to show which parts of the application have been impacted. It can be done through the web interface.
  2. Take feedback into account
    Once your PR is open, other contributors can review your changes:

    • Any user can review your changes.
    • Your code has to be approved by a contributor familiar with the code.
    • All users are expected to take comments into account.
    • Comments tend to be written in an open and direct manner. The intent is to efficiently collaborate towards a solution we all agree on.
    • Once all discussions are resolved, a maintainer integrates the change.

For large PRs that are bound to evolve over time, keeping corrections during review in separate commits helps reviewers. In the case of multiple reviews by the same person, this can save full re-review (ask for help if necessary):

  3. If you believe somebody forgot to review / merge your change, please speak out, multiple times if need be.

Suggested workflow

Here’s a suggested workflow.

It is necessary to communicate via instant messaging (Matrix, Slack, etc.) in order to guarantee the smooth flow of PR validation.

sequenceDiagram
  actor A as PR author
  actor R as Reviewer/Maintainer
  
  A->>R: Asks for a review, notifying some people
  R->>A: Answers yes or no
  
  loop Loop between author and reviewer
    R-->>+A: Comments, asks for changes
    A-->>R: Answers to comments or requested changes
    A-->>-R: Makes necessary changes in dedicated "fixups"
    R-->>A: Reviews changes
    R-->>A: Resolves requested changes/conversations if ok
  end
 
  A->>R: Rebase and apply fixups
  R->>A: Checks commits history
  R->>A: Approves or closes the PR
  Note left of R: & Merges if maintainer

Finally continue towards tests ‣

3.1.3.7 - Tests

Recommendations for testing

Back-end

  • Integration tests are written with pytest in the /tests folder.
  • Each route described in the openapi.yaml files must have an integration test.
  • The test must check both the format and content of valid and invalid responses.

Front-end

The functional writing of the tests is carried out with the Product Owners, and the developers choose a technical implementation that precisely meets the needs expressed and fits in with the recommendations presented here.

We use Playwright to write end-to-end tests, and vitest to write unit tests.

The browsers tested are currently Firefox and Chromium.

Basic principles

  • Tests must be short (1min max) and go straight to the point.
  • Arbitrary timeouts are outlawed; a test must systematically wait for a specific event, as sketched below. It is possible to use polling (retrying an action, e.g. a click, after a certain time) offered by the Playwright API.
  • All tests must be parallelizable.
  • Tests must not target or wait for translated text elements; prefer the DOM tree structure or specific test ids.
  • We’re not testing the data, but the application and its functionality. Data-specific tests should be developed in parallel.
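
As a sketch of what this looks like with Playwright (the test ids are hypothetical), prefer auto-retrying assertions over arbitrary timeouts:

import { expect, test } from "@playwright/test";

test("adds a train to the timetable", async ({ page }) => {
  await page.goto("/");

  // bad: an arbitrary timeout makes the test slow and flaky
  // await page.waitForTimeout(5000);

  // good: act, then wait for a specific element; expect() polls
  // until the assertion passes or the test times out
  await page.getByTestId("add-train-button").click();
  await expect(page.getByTestId("timetable-train-item")).toBeVisible();
});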

Data

The data tested must be public data. The data required for the tests (infrastructure and rolling stock) is provided in the application’s JSON files, injected at the start of each test and deleted at the end, regardless of the result or how the test is stopped, including with CTRL+C.

This is done by API calls in TypeScript before launching the actual test.

The data tested is the same, both locally and via continuous integration.

Atomicity of a test

Each test must be atomic: it is self-sufficient and cannot be divided.

A test will target a single feature or component, provided it is not too large. A test will not cover an entire module or application: that would necessarily require a set of tests, in order to preserve test atomicity.

If a test needs elements to be created or added, these operations must be carried out by API calls in TypeScript upstream of the test, as is done for adding data. These elements must be deleted at the end of the test, regardless of the result or how it is stopped, including by CTRL+C.

This allows tests to be parallelized.

However, in certain cases where it is relevant, a test may contain several clearly explained and justified test subdivisions (several test() in a single describe()).

Example of a test

The requirement: “We want to test the addition of a train to a timetable”.

  1. Add the test infrastructure and rolling stock to the database by API calls.
  2. Create the project, study and scenario, choosing the test infrastructure, by API calls.
  3. Start the test, clicking on "add one or more trains" until the presence of the trains in the timetable is verified.
  4. Whether the test passes, fails or is stopped, the project, study and scenario are deleted, along with the test rolling stock and infrastructure, by API calls.

NB: the test will not test all the possibilities offered by the addition of trains; this should be a specific test which would test the response of the interface for all scenarios without adding trains.
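
A minimal sketch of this setup/teardown pattern, assuming hypothetical helpers that wrap the editoast API calls:

import { test } from "@playwright/test";

// hypothetical helpers wrapping the editoast REST API
import { createDataForTests, deleteDataForTests } from "./assets/utils";

test.beforeEach(async () => {
  // inject infrastructure, rolling stock, project, study and scenario by API calls
  await createDataForTests();
});

test.afterEach(async () => {
  // always clean up, whatever the outcome of the test
  await deleteDataForTests();
});

test("adds a train to the timetable", async ({ page }) => {
  // the actual UI interactions go here
});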

Continue towards write code ‣

3.1.4 - Review process

How to give useful feedback

The reviewer/maintainer undertakes to carry out the review quickly, and is also responsible for closing change requests, checking the commit history, and quickly merging the pull request if allowed.

Here are a few tips and recommendations that we think make for a humane, relevant and rewarding code review for all contributors:

Here’s a suggested workflow.

It is necessary to communicate via instant messaging (Matrix, Slack, etc.) in order to guarantee the smooth flow of PR validation.

sequenceDiagram
  actor A as PR author
  actor R as Reviewer/Maintainer
  
  A->>R: Asks for a review, notifying some people
  R->>A: Answers yes or no
  
  loop Loop between author and reviewer
    R-->>+A: Comments, asks for changes
    A-->>R: Answers to comments or requested changes
    A-->>-R: Makes necessary changes in dedicated "fixups"
    R-->>A: Reviews changes
    R-->>A: Resolves requested changes/conversations if ok
  end
 
  A->>R: Rebase and apply fixups
  R->>A: Checks commits history
  R->>A: Approves or closes the PR
  Note left of R: & Merges if maintainer

If the reviewer is not a maintainer, the PR’s author is responsible for contacting a maintainer to merge the PR. In special cases (especially near a feature freeze), a maintainer should be found early.

The code review pyramid

3.1.5 - Report issues

Report a bug or suggest an enhancement

Please report anything you deem significant!

Our bug tracking platform is github, so you have to register to report bugs.

Follow this link and pick whatever template fits the best.

Bugs

  • Bugs must have a correct description, and the bug’s issue template must be filled in carefully.
  • Bugs must be tagged with (for team members):
    • kind:bug
    • one or several area:<affected_area> if possible, if the affected area is not known leave it blank it will be added later by another team member.
    • one severity:<bug_severity> if possible, if severity is not known leave it blank it will be added later by another team member.
      • severity:minor: User can still use the feature.
      • severity:major: User sometimes can’t use the feature.
      • severity:critical: User can’t use the feature.
  • OSRD team members can change issues’ tags (severity, area, kind, …). You may leave a comment to explain changes.
  • If you are working on a bug or plan to work on a bug, assign yourself to the bug.
  • PRs solving bugs should add regression tests to ensure that the bug will not come back in the future.

3.2 - Deploy OSRD

Learn how to deploy OSRD in various environments

First of all, we recommend learning about the containers architecture of OSRD.

We will cover how to deploy OSRD within the following setups:

  • Using Docker Compose on a single node
  • Using Kubernetes with Helm

It is also possible to deploy each service of OSRD manually on a system, but we will not cover this topic within this guide.

3.2.1 - Docker Compose

Using docker compose for single node deployment

The OSRD project includes a docker-compose.yml file designed to facilitate the deployment of a fully functional OSRD environment. Primarily intended for development purposes, this Docker Compose configuration can also be adapted for quick, single-node deployments.

Prerequisites

Before proceeding with the deployment, ensure that you have the following installed:

  • Docker
  • Docker Compose

Configuration Overview

The docker-compose.yml file defines the following services:

  1. PostgreSQL: A PostgreSQL database with PostGIS extension.
  2. Redis: A Redis server for caching.
  3. Core: The core OSRD service.
  4. Front: The front-end service for OSRD.
  5. Editoast: An OSRD service responsible for various editorial functions.
  6. Gateway: Serves as the gateway for the OSRD services.
  7. Wait-Healthy: A utility service to ensure all services are healthy before proceeding.

Each service is configured with health checks, volume mounts and necessary environment variables.

Deployment Steps

  1. Clone the Repository: First, clone the OSRD repository to your local machine.
  2. Environment Variables (optional): Set necessary environment variables if you need to adjust some configurations.
  3. Build and Run: Navigate to the directory containing docker-compose.yml and run:
docker-compose up --build

This command builds the images and starts the services defined in the Docker Compose file.

Accessing Services

While all HTTP services are accessed through the gateway (http://localhost:4000), you can also access each service directly using its exposed port:

  • PostgreSQL: Accessible on localhost:5432.
  • Redis: Accessible on localhost:6379.
  • Core Service: Accessible on localhost:8080.
  • Front-End: Accessible on localhost:3000.
  • Editoast: Accessible on localhost:8090.

Notes and Considerations

  • This setup is designed for development and quick deployments. For production environments, additional considerations for security, scalability and reliability should be addressed.
  • Ensure that the POSTGRES_PASSWORD and other sensitive credentials are securely managed, especially in production deployments.

3.2.2 - Kubernetes with Helm

Using Helm for Kubernetes deployments

The OSRD project’s Helm Chart provides a flexible and efficient way to deploy OSRD services in a Kubernetes environment. This document outlines the configuration options available in the Helm Chart, focusing on each service component.

Prerequisites

Before proceeding with the deployment, ensure that you have the following installed:

  • A Kubernetes cluster up and running
  • A PostgreSQL database with PostGIS
  • A Redis server (used for caching)

The tileserver

Tileserver is the component responsible for generating vector map tiles. It is recommended to separate it from standard Editoast while running a production setup since Editoast cannot be scaled horizontally (it is stateful).

You can visualize the recommended deployment here:

flowchart TD
    gw["gateway"]
    front["front-end static files"]
    gw -- local file --> front
    
    browser --> gw
    gw -- HTTP --> editoast
    gw -- HTTP --> tileserver-1
    gw -- HTTP --> tileserver-2
    gw -- HTTP --> tileserver-n...
    editoast -- HTTP --> core

The Helm chart leverages Kubernetes’ HorizontalPodAutoscaler in order to spawn as many tileserver replicas as required for the current workload.

Chart Values Overview

The Helm Chart is configurable through the following values:

Core Service

  • core: Configuration for the core OSRD service.
    • internalUrl: Internal URL for service communication.
    • image: Docker image to use.
    • pullPolicy: Image pull policy.
    • replicaCount: Number of replicas.
    • service: Service type and port configuration.
    • resources, env, annotations, labels, nodeSelector, tolerations, affinity: Various Kubernetes deployment options.

Editoast Service

  • editoast: Configuration for the Editoast service.
    • Includes similar options as core for Kubernetes deployment.
    • init: Initialization configuration.

Tile Server

  • tileServer: Specialized Editoast service that serves only vector map tiles.
    • enabled: Set to true to enable tile server functionality.
    • image: Docker image to use (typically the same as Editoast).
    • replicaCount: Number of replicas, allowing for horizontal scaling.
    • hpa: Horizontal Pod Autoscaler configuration.
    • Other standard Kubernetes deployment options.

Gateway

  • gateway: Configuration for the OSRD gateway.
    • Includes service, ingress, and other Kubernetes deployment options.
    • config: Specific configurations for authentication and trusted proxies.

Deployment

The charts are available in the ghcr OCI repository, which contains two Helm charts.

To deploy the OSRD services using this Helm Chart:

  1. Configure Values: Adjust the values in the Helm Chart to suit your deployment needs.

  2. Install Chart: Use Helm to install the chart into your Kubernetes cluster.

    helm install osrd oci://ghcr.io/osrd-project/charts/osrd -f values.yml
    

3.3 - Logo

The OSRD logo, its variants, and its use

You can download each logo independently by clicking directly on it, or all the logos compressed into a zip file.

It is advisable to carefully choose the logo you want to use, depending on the background on which you want to display it.

Modification, addition or deletion of the shading other than as presented in the logos is not authorised. (This applies more generally throughout the design: the choice to use drop shadows is part of the design considerations, not a variable element.)

Official

Official for dark backgrounds

White

Black

Favicons, logo without text

🚫 What you can’t do

Too small (< 16px height)

Disproportion

Change the text colour or drop shadow

Changing direction

Deformation

✅ What you can do

Changing the internal colour for a specific event

Use of logo only (without text)

Colors

These colours are those of the logo and should not be confused with those of the overall design of the OSRD interface.

#786ABF #C7B2DE

3.4 - OSRD's design

Colours, fonts, uses…

Everything is presented on a dedicated website https://design.osrd.fr

A “design system” is being developed.

4 - Technical reference

Internal machinery and APIs

Technical reference guides contain technical reference for APIs and other aspects of OSRD’s machinery. They describe how it works and how to use it but assume that you have a basic understanding of key concepts.

4.1 - Architecture

Learn more about OSRD architecture

Architecture documents are meant to help understand how OSRD works overall.

4.1.1 - Data-flow

OSRD’s data-flow diagram

Data-flow diagram

4.1.2 - Services

OSRD’s services architecture

OSRD uses a multi-service architecture where several software components interact with each other. This choice was made to ensure the modularity of the code and to allow certain OSRD services to be used by external applications.

Current

Services architecture

Target

Multiple things are planned for the future, including:

  • Add an authenticating reverse proxy service (gateway)
  • Deploy editoast as a scalable map tile service
  • Make the core service scalable
    • Core services will handle only one infrastructure (on a given version) at a time
    • Add a message broker (RabbitMQ) to dispatch queries to the right instance
    • Create a core-controller service that will spawn / kill / scale core services (k8s and docker support)
    • Responsibility for infrastructures loading is moved from the front to the core-controller

Services architecture

4.2 - Design documents

Learn more about how the software was designed

Design documents are meant to help understand and participate in designing software.

Each design document describes a number of things about a piece of software:

  • its goals
  • its constraints
  • how its inputs and outputs were modeled
  • how it works

4.2.1 - Signaling

Describes the signaling model

Description

The signaling layer includes all signals, which respond to track occupancy and reservation. Signals can be of different types, and are modularly loaded. Only their behavior towards the state of the infrastructure and the train’s reaction to signaling matters.

Signals are connected to each other by blocks. Blocks define paths permitted by signaling.

Goals

The signaling system is at the crossroads of many needs:

  • it must allow for realistic signaling simulation in a multi-train simulation
  • it must allow the conflict detection system to determine which resources are required for the train
  • it must allow application users to edit and display signals
  • it must allow for visualization of signals on a map
  • it must allow for automated import from existing databases

Design requirements:

All static data:

  • must enable the front-end to display the signals
  • must enable the infrastructure editor to configure signals
  • must enable the back-end to simulate signals
  • must be close to realistic industry models
  • must allow for the modeling of composite signals, which carry several logical signals within a single physical signal

To simulate signaling:

  • blocks must be generated for both user convenience and pathfinding
  • for each signal, its next compatible signal and protected zones must be deduced
  • the minimum necessary information must be provided to the signaling modules for their operation
  • enable using signaling modules without instantiating a complete simulation
  • allow for signals to be loaded in any order, in parallel

For speed limits:

  • some speed limits have to be enforced depending on the train path’s routes
  • speed limits can be configured to have an impact on signaling
  • ability to link the reaction of the train to a signal, and a speed limit

Assumptions

  • Each physical signal can be decomposed into a list of logical signals, all of which are associated with a signaling system.
  • Blocks have a type.
  • It is possible to compute, given a signal alone, its block and route delimiting properties.
  • Blocks never cross route boundaries.
  • Blocks which are not covered by routes do not exist, or can be ignored.
  • At any time, trains only use one signaling system capable of transmitting movement authority.
  • Speed limits change depending on which route is in use, and affect how signals behave.
  • Some speed limits have an impact on signaling, and some do not.
  • Either a speed limit differentiates per train category, or it requires dynamic signaling, but not both.

Operations

  • Instantiating a view creates a framework for observing signals.
  • Planning the path tells the view which blocks the train will traverse.
  • Observing a signal subscribes to the state of a signal (through the view).
  • Passing a signal signals that a signal has been passed by the train (through the view).

Research Questions

  • Are there any blocks that overlap the end of a route? SNCF(Loïc): No.
  • Are there any signals which rely on the state of the one after next signal? SNCF(Loïc): No.
  • Are there signals that change behavior based on the active block in front of them? SNCF(Loïc): Yes, for slowdowns.
  • Are there signals that are the start of blocks of different types? SNCF(Loïc): Yes.
  • Can the behavior of a signal depend on which block is active after the end of the current block? SNCF(Loïc): Yes, with slowdowns or blinking yellow.
  • Do some signaling systems need additional information in the blocks? SNCF(Loïc): Kind of, there are slowdowns, but it’s not specifically carried by the block.
  • Is it nominal for a train to have multiple active signaling systems at the same time? SNCF(Loïc): No.
  • Are there any signals which depend on which route is set, but are not route delimiters? SNCF(Loïc): Yes, see Sémaphore Clignotant.
  • How do speed limits per train category and dynamic signaling interact? SNCF(Nicolas): There shouldn’t be any speed limit per category signaled by dynamic signaling.
  • Are there any signals which depend on the state of multiple routes? SNCF(Loïc): No.

4.2.1.1 - Signaling systems

Each signaling system has:

  • A unique identifier (a string).
  • Its signal state type, which enables deducing:
    • The graphical representation of the signal
    • How a train would react to the signal
    • If the signal state constrains Movement Authority
  • The signal parameter types, names and description, which enable front-end edition of signal parameters.
  • The block and route conditions, which enable evaluating whether a signal delimits blocks or routes, given its parameters.
{
    # unique identifier for the signaling system
    "id": "BAL",
    "version": "1.0",
    # the schema of the dynamic state of signals of this type
    "signal_state": [
        {"kind": "enum", "field_name": "aspect", values: ["VL", "A", "S", "C"]},
        {"kind": "flag", "field_name": "ralen30"},
        {"kind": "flag", "field_name": "ralen60"},
        {"kind": "flag", "field_name": "ralen_rappel"}
    ],
    # describes static properties of the signal
    "signal_properties": [
        {"kind": "flag", "field_name": "Nf", "display_name": "Non-franchissable"},
        {"kind": "flag", "field_name": "has_ralen30", "default": false, "display_name": "Ralen 30"},
        {"kind": "flag", "field_name": "has_rappel30", "default": false, "display_name": "Rappel 30"},
        {"kind": "flag", "field_name": "has_ralen60", "default": false, "display_name": "Ralen 60"},
        {"kind": "flag", "field_name": "has_rappel60", "default": false, "display_name": "Rappel 60"}
    ],
    # describes dynamic properties of the signal. These can be set on a per-route basis
    "signal_parameters": [
        {"kind": "flag", "field_name": "short_block", "default": false, "display_name": "Short block"},
        {"kind": "flag", "field_name": "rappel30", "default": false, "display_name": "Rappel 30"},
        {"kind": "flag", "field_name": "rappel60", "default": false, "display_name": "Rappel 60"}
    ],

    # these are C-like boolean expressions:
    # true, false, <flag>, <enum> == value, &&, || and ! can be used

    # used to evaluate whether a signal is a block boundary. Only properties can be used, not parameters.
    "block_boundary_when": "true",

    # used to evaluate whether a signal is a route boundary. Only properties can be used, not parameters.
    "route_boundary_when": "Nf",

    # A predicate used to evaluate whether a signal state can make a train slow down. Used for naive conflict detection.
    "constraining_ma_when": "aspect != VL"
}

4.2.1.2 - Blocks and signals

Blocks

The blocks have several attributes:

  • A signaling system that corresponds to that displayed by its first signal.
  • A path, which is a list of direction + detector pairs (just like route paths).
  • An entry signal, (optional when the block starts from a buffer stop).
  • Intermediate signals, if any (only used by systems with distant signals).
  • An exit signal, (optional when the block ends at a buffer stop).

The path is expressed from detector to detector so that it can be overlayed with the route graph.

A few remarks:

  • There can be multiple blocks with the same path, as long as they have different signaling systems. Trains only use a block at a time, and ignore others.
  • Blocks do not have a state: one can rely on the dynamic state of the zones that make it up.
  • Blocks are used to figure out which signals protect which zones in a given context.

Dependencies

  • route graph. For each route:
    • waypoints: List<DiDetector>
    • signals: OrderedMap<Position, UnloadedSignal>
    • speed_limits: RangeMap<Position, SpeedLimit>, including the logic for train category limits
  • signaling systems
  • drivers

Signals

Physical signals are made up of one or more logical signals, which are displayed as a single unit in the field. During simulation, logical signals are treated as separate signals.

Each logical signal is associated with a signaling system, which defines if the signal transmits Movement Authority, speed limits, or both.

Logical signals have one or more drivers. Signal drivers are responsible for computing signal state. Any given signal driver only works for a given pair of signaling systems, where the first one is displayed by the signal, and the second is the one displayed by the next signal.

When a logical signal has an empty driver list, its content is deduced from neighboring signals.

For example, a BAL signal that is both a departure of a TVM block and a departure of a BAL block will have two drivers: BAL-BAL and BAL-TVM.

Announcing speed limits

When a signal announces a speed limit, it needs to be linked with a speed section object. This is meant to enable smooth transitions between the reaction to the announce signal, and the limit itself.

If multiple signals are involved in the announce process, only the one closest to the speed limit has to have this attribute set.

{
    # ...
    "announce_speed_section": "${SPEED_SECTION_ID}"
    # ...
}

Conditional parameters

Some signal parameters vary depending on which route is set. On each signal, an arbitrary number of rules can be added. If the signal is the last to announce a speed limit, it must be explicitly mentioned in the rule.

{
    # ...
    "announce_speed_section": "${SPEED_SECTION_ID}",
    "default_parameters": {"short_block": "false"},
    "conditional_parameters": [
        {
            "on_route": "${ROUTE_ID}",
            "announce_speed_section": "${SPEED_SECTION_ID}",
            "parameters": {"rappel30": "true", "short_block": "true"}
        }
    ]
    # ...
}

Signal parameter values are looked up in the following order:

  1. per route conditional parameters
  2. per signal default parameters (default_parameters)
  3. parameter default value, from the signaling system’s .signal_parameters[].default
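
A small sketch of this lookup order, written in TypeScript purely for illustration (the types and names are not part of the actual codebase):

type ConditionalRule = { on_route: string; parameters: Record<string, string> };
type Signal = {
  default_parameters: Record<string, string>;
  conditional_parameters: ConditionalRule[];
};

function resolveParameter(
  signal: Signal,
  signalingSystemDefaults: Record<string, string>,
  route: string,
  name: string
): string | undefined {
  // 1. per route conditional parameters
  const rule = signal.conditional_parameters.find((r) => r.on_route === route);
  if (rule && name in rule.parameters) return rule.parameters[name];
  // 2. per signal default parameters
  if (name in signal.default_parameters) return signal.default_parameters[name];
  // 3. parameter default value, from the signaling system
  return signalingSystemDefaults[name];
}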

Serialized format

The serialized / raw format is the user-editable description of a physical signal.

Raw signals have a list of logical signals, which are independently simulated units sharing a common physical display. Each logical signal has:

  • a signaling system
  • user-editable properties, as specified in the signaling system description
  • a list of default parameters, which can get overridden per route
  • an optional announced speed section, which can get overridden per route
  • a list of allowed next signaling systems, which are used to load drivers

For example, this signal encodes a BAL signal which:

  • starts both a BAL and a TVM block
  • announces speed limit B on all routes except route A, where speed limit C is announced
  • on route A, the block is shorter than usual
{
    # signals must have location data.
    # this data is omitted as its format is irrelevant to how signals behave

    "logical_signals": [
        {
            # the signaling system shown by the signal
            "signaling_system": "BAL",
            # the settings for this signal, as defined in the signaling system manifest
            "properties": {"has_ralen30": "true", "Nf": "true"},
            # this signal can react to BAL or TVM signals
            # if the list is empty, the signal is assumed to be compatible with all following signaling systems
            "next_signaling_systems": ["BAL", "TVM"]
            "announce_speed_section": "${SPEED_SECTION_B}",
            "default_parameters": {"rappel30": "true", "short_block": "false"},
            "conditional_parameters": [
                {
                    "on_route": "${ROUTE_A}",
                    "announce_speed_section": "${SPEED_SECTION_C}",
                    "parameters": {"short_block": "true"}
                }
            ]
        }
    ]
}

For example, this signal encodes a BAL signal which starts a BAL block, and shares its physical display / support with a BAPR signal starting a BAPR block:

{
    # signals must have location data.
    # this data is omitted as its format is irrelevant to how signals behave

    "logical_signals": [
        {
            "signaling_system": "BAL",
            "properties": {"has_ralen30": "true", "Nf": "true"},
            "next_signaling_systems": ["BAL"]
        },
        {
            "signaling_system": "BAPR",
            "properties": {"Nf": "true", "distant": "false"},
            "next_signaling_systems": ["BAPR"]
        }
    ]
}

Signal description strings

Signal definitions need to be condensed into a shorter form, just to look up signal icons. In order to store this into MVT map tiles hassle free, it’s condensed down into a single string.

It looks something like this: BAL[Nf=true,ralen30=true]+BAPR[Nf=true,distant=false]. It’s built as follows:

  • a list of logical signals, sorted by signaling system name, separated by +
  • inside each logical signal, signal properties are sorted by name, enclosed in square brackets and separated by ,
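
As a purely illustrative sketch (in TypeScript), building such a string could look like this:

type LogicalSignal = { signaling_system: string; properties: Record<string, string> };

// Sketch: condense a physical signal's logical signals into a single description string
function signalDescription(logicalSignals: LogicalSignal[]): string {
  return logicalSignals
    .slice()
    .sort((a, b) => a.signaling_system.localeCompare(b.signaling_system))
    .map((signal) => {
      const props = Object.entries(signal.properties)
        .sort(([keyA], [keyB]) => keyA.localeCompare(keyB))
        .map(([key, value]) => `${key}=${value}`)
        .join(",");
      return `${signal.signaling_system}[${props}]`;
    })
    .join("+");
}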

Dependencies

For signal state evaluation:

  • train path in blocks
  • portion of the path to evaluate
  • drivers
  • state of the zones in the section to evaluate

4.2.1.3 - Speed limits

Describes how speed limits work

Description

Railway infrastructure has a surprising variety of speed limits:

  • some are known by the driver, and not announced at all
  • some are announced by fixed signs regardless of where the train goes
  • some are announced by fixed signs, depending on where the train path goes
  • some are announced by dynamic signals regardless of where the train goes
  • some are announced by dynamic signals, depending on where the train path goes

Data model

{
    # unique speed limit identifier
    "id": "...",

    # A list of routes the speed limit is enforced on. When empty
    # or missing, the speed limit is enforced regardless of the route.
    #
    # /!\ When a speed section is announced by signals, the routes it is
    # announced on are automatically filled in /!\
    "on_routes": ["${ROUTE_A}", "${ROUTE_B}"]
    # "on_routes": null, # not conditional
    # "on_routes": [], # conditional

    # A speed limit in meters per second.
    "speed_limit": 30,

    # A map from train tag to speed limit override. If missing and
    # the speed limit is announced by a signal, this field is deduced
    # from the signal.
    "speed_limit_by_tag": {"freight": 20},

    "track_ranges": [{"track": "${TRACK_SECTION}", "begin": 0, "end": 42, "applicable_directions": "START_TO_STOP"}],
}

Design considerations

Where to put the speed limit value

When a speed limit is announced by dynamic signaling, we may be in a position where speed limit value is duplicated:

  • once in the signal itself
  • once in the speed limit

There are multiple ways this issue can be dealt with:

✅ Mandatory speed limit value in the speed section

Upsides:

  • simpler to implement, works even without train reactions to signals nor additional API

Downsides:

  • more work on the side of users
  • room for inconsistencies between the speed limit announced by signaling, and the effective speed limit

❌ Deduce the signal constraint from the speed limit

This option was not explored much, as it was deemed awkward to deduce signal parameters from a speed limit value.

❌ Deduce the speed limit from the signal

Make the speed limit value optional, and deduce it from the signal itself. Speed limits per tag also have to be deduced if missing.

Upsides:

  • less work for users
  • lessens the likelihood of configuration mismatches

Downsides:

  • not all signaling systems work well with this. It may be difficult to deduce the announced speed limit from a signal configuration, such as with TVM.
  • speed limits have to be deduced, which increases implementation complexity

Speed limits announced by dynamic signaling often start being enforced at a specific location, which is distinct from the signal that announces the limit.

To allow for correct train reactions to this kind of limits, a link between the announce signal and the speed limit section has to be made at some point.

❌ Automated matching of signals and speed sections

Was not deemed realistic.

❌ Reference signals from speed sections

Was deemed to be awkward, as signaling is currently built over interlocking. Referencing signaling from interlocking creates a circular dependency between the two schemas.

Add a list of (route, signal) tuples to speed sections.

Upside:

  • a link with the signal can be made with creating the speed section

Downside:

  • Creates a dependency loop between speed limits and signaling. Part of the parsing of speed limit has to be deferred.
  • Signals parameters also have to be set per route, which is done in the signal. Having per-route options on both sides doubles the work.

❌ Inlining speed limit definitions into signals

Introduces a new type of speed limit, which are announced by signals. These speed limits are directly defined within signal definitions.

{
    # ...
    "conditional_parameters": [
        {
            "on_route": "${ROUTE_ID}",
            "speed_section": {
                "speed_limit": 42,
                "begin": {"track": "a", "offset": 10},
                "end": {"track": "b", "offset": 15},
            },
            "parameters": {"rappel30": "true", "short_block": "true"}
        }
    ]
    # ...
}

Upsides:

  • straightforward infrastructure edition experience for speed sections announced by a single signal

Downsides:

  • creates two separate kinds of speed limits:
    • can cause code duplication
    • could make later changes of the data model trickier
    • it’s unclear whether the criterion used to make this partition is appropriate
  • speed sections created directly inside signals can only be announced by a single signal, which could be an issue for speed sections which apply to very large areas, and are announced by multiple signals (such as one for each direction)
  • the cost of reversing this decision could be fairly high
✅ Reference speed sections from signals

{
    # ...
    "conditional_parameters": [
        {
            "on_route": "${ROUTE_ID}",
            "announced_speed_section": "${SPEED_SECTION_ID}",
            "parameters": {"rappel30": "true", "short_block": "true"}
        }
    ]
    # ...
}

Upsides:

  • single unified way of declaring speed limits
  • very close to the current implementation

Downsides:

  • adds a level of indirection between the signal and the speed section
  • the edition front-end has to be smart enough to create / search speed sections from the signal edition menu

Speed limits by route

Some speed limits only apply to some routes. This relationship needs to be modeled:

  1. speed limits could have a list of routes they apply on
  2. routes could have a list of speed limits they enforce
  3. the routes a speed limit apply on could be deduced from its announce signals, plus an explicit list of routes per speed section

We took option 3.

4.2.1.4 - Simulation lifecycle

Tells the story of how signaling infrastructure is loaded and simulated on

Loading Signal Parameters

The first step of loading the signal is to characterize the signal in the signaling system. This step produces an object that describes the signal.

During the loading of the signal:

  • the signaling system corresponding to the provided name is identified
  • the signal properties and parameters are loaded and validated according to the signaling system spec
  • the signal’s block and route delimiting properties are evaluated

Loading the Signal

Once signal parameters are loaded, drivers can be loaded. For each driver:

  • The driver implementation is identified from the (signaling_system, next_signaling_system) pair.
  • It is verified that the signaling system outgoing from the driver corresponds to the one of the signal.
  • It is verified that there is no existing driver for the incoming signaling system of the driver.

This step produces a Map<SignalingSystem, SignalDriver>, where the signaling system is the one incoming to the signal. It then becomes possible to construct the loaded signal.

Constructing Blocks

  • The framework creates blocks between signals following the routes present in the infrastructure, and the block properties of the signals.
  • Checks are made on the created block graph: it must always be possible to choose a block for each signal and each state of the infrastructure.

Block validation

The validation process helps to report invalid configurations in terms of signaling and blockage. The validation cases we want to support are:

  • The signaling system may want to validate, knowing if the block starts / ends on a buffer:
    • the length of the block
    • the spacing between the block signals, first signal excluded
  • Each signal in the block may have specific information if it is a transition signal. Therefore, all signal drivers participate in the validation.

In practice, there are two separate mechanisms to address these two needs:

  • The signaling system module is responsible for validating signaling within blocks.
  • Signal drivers take care of validating transitions between blocks.
extern fn report_warning(/* TODO */);
extern fn report_error(/* TODO */);

struct Block {
   startsAtBufferStop: bool,
   stopsAtBufferStop: bool,
   signalTypes: Vec<SignalingSystemId>,
   signalSettings: Vec<SignalSettings>,
   signalPositions: Vec<Distance>,
   length: Distance,
}

/// Runs in the signaling system module
fn check_block(
   block: Block,
);


/// Runs in the signal driver module
fn check_signal(
   signal: SignalSettings,
   block: Block, // The partial block downstream of the signal - no signal can see backward
);

Signal lifecycle

Before a train startup:

  • the path of the train is given, expressed both as routes and blocks
  • the signal queue a train will encounter is established

During the simulation:

  • along a train’s movement, the track occupation ahead of it is synthesized
  • when a train observes a signal, its state is evaluated

Signal state evaluation

Signals are modeled as an evaluation function, taking a view of the world and returning the signal state


enum ZoneStatus {
   /** The zone is clear to be used by the train */
   CLEAR,
   /** The zone is occupied by another train, but otherwise clear to use */
   OCCUPIED,
   /** The zone is incompatible. There may be another train as well */
   INCOMPATIBLE,
}

interface MAView {
    /** Combined status of the zones protected by the current signal */
    val protectedZoneStatus: ZoneStatus
    val nextSignalState: SignalState
    val nextSignalSettings: SignalSettings
}

fun signal(maView: MAView?): SignalState {
    // ...
}

The view should allow access to the following data:

  • a synthesized view of zones downstream until the end of the train’s MA
  • the block chain
  • the state of downstream signals which belong to the current block chain

Signaling view path

The path along which the MAView and SpeedLimitView live is best expressed using blocks:

  • blocks can be added to extend the view along the path of a train
  • the view can be reduced by removing blocks, as the train passes by signals

Simulation outside the train path

Everything mentioned so far was designed to simulate signals between a train and the end of its movement authority, as all other signals have no influence over the behavior of trains (they cannot be seen, or are disregarded by drivers).

Nevertheless, one may want to simulate and display the state of all signals at a given point in time, regardless of which signals are in use.

Simulation rules are as follows:

  • if a signal starts blocks which have differing paths, it is simulated as if it were at the end of a route
  • if a signal starts blocks which all start the same path, it is simulated in the same view as the next signals in this path

4.2.2 - Conflict detection

Detect unrealistic timetables

This document is a work in progress

Conflict detection is the process of looking for timetable conflicts. A timetable conflict is any predictable condition which disrupts planned operations. Planned operations can be disrupted if a train is slowed down, prevented from proceeding, or delayed.

One of the core features of OSRD is the ability to automatically detect some conflicts:

  • spacing conflicts: insufficient spacing between trains sharing the same path
  • routing conflicts: insufficient spacing between trains with intersecting paths

Some other kinds of conflicts may be detected later on:

  • maintenance conflicts: planned maintenance disrupts the path of a train
  • power delivery conflicts: combined power delivery requirements exceeds capacity

Conflict detection relies on interlocking and signaling modeling and simulation to:

  1. figure out what each actor requires to perform its duty undisturbed
  2. detect conflicting requirements

Design constraints

The primary design goals are as follows:

  • enable threading new train paths into an existing timetable (see STDCM)
  • produce conflicts which can be linked back to a root cause
  • operate in a way that can be visualized and interpreted
  • scale to real world timetables: millions of yearly trains, tens of thousands of daily trains

In addition to these goals, the following constraints apply:

  • it must be possible to thread new train paths into timetables with existing conflicts
  • it must not cause false-negatives: if no conflicts are detected, a multi-train simulation of the same timetable must not yield any slowdowns
  • it cannot rely on data we do not have
  • it has to enable later support of mobile block systems
  • it has to rely on existing signaling and interlocking simulation
  • it has to enable detecting conflicts regardless of the signaling system in use
  • it has to support transitions between signaling systems
  • it has to support conflicts between different signaling systems

Conflict modeling

Actors are objects which cause resources to be used:

  • train paths (or someone / something on the behalf of the train)
  • maintenance work

Actors need resources to be available to proceed, such as:

  • zones, which have one state per way to traverse it
  • switches, which have one state per position
  • station platforms, which could be used to prevent two large trains from occupying both sides of a tiny platform

Actors emit resource requirements, which:

  • describe the need of an actor for a resource, for a given time span
  • describe what the resource is needed for
  • detail how the resource is used, such as switch position, zone entry and exit

Resource requirements can turn out to be either satisfied or conflicting with other requirements, depending on compatibility rules.

Compatibility rules differ by requirement purpose and resource type. For example:

  • spacing requirements are exclusive: simultaneous requirements for the same resource are conflicting
  • zone and switch requirements are shareable: simultaneous requirements are satisfied if the resource configuration is identical

For conflict detection to work, resource requirements have to be at least as extensive as what’s required to guarantee that a train path will not be disturbed.

Routing conflicts

Context

For trains to proceed safely along their planned path:

  • switches have to be moved in the appropriate position
  • level crossings have to activate
  • risks of collision with other trains have to be mitigated

In practice, the path of trains is partitioned into routes, which when set, ensure a train can safely follow the route.

Routes have the following lifecycle:

  • As a train approaches the start of one of its routes, it is called by an operator. If all resources required to safely use the route are available, switches and level crossings start to move. If a resource is not available, e.g. because another train has reserved a section of track, this process is delayed until all conditions are met.
  • Once all resources are configured and reserved, the route is set and ready to be followed. Before that point, the entry of the route was protected by signaling, which prevented the train from moving past the entry point.
  • As the train moves along the route, it is destroyed. When the tail of the train releases key detectors along the route, resources before this detector are released, and can then be reserved by other routes.

For a train to proceed through a route unimpeded, the following things have to happen:

  • The route has to be set before the train arrives, and before it is slowed down by signaling.
  • The route has to be called, so that it is set in time.
  • All resources required for the route to start setting at call time have to be available.

Generating requirements

struct RouteRequirement {
    route: RouteId,
    set_deadline: Time,
    zone_requirements: Vec<RouteZoneRequirement>,
}

struct RouteZoneRequirement {
    zone: ZoneId,
    entry_det: DirDetectorId,
    exit_det: DirDetectorId,
    release_time: Time,
    switches: Map<SwitchId, SwitchConfigId>,
}

Routing requirements are generated by the following algorithm:

  • Compute the set deadline using signaling simulation. The set deadline is the point in time at which the train would be slowed down if the route were not set.
  • For each zone in each route, simulate when it would be released, and thus not required anymore.

Route overlaps are not yet supported.

Requirement compatibility rules

Requirement compatibility is evaluated for all RouteZoneRequirements, grouped by zone. Requirements A and B, ordered such that A.set_deadline <= B.set_deadline, are compatible if and only if either:

  • their active time span does not overlap, such that A.release_time <= (B.set_deadline - activation_time), where the activation time is the delay required to reconfigure from A.switches to B.switches.
  • (A.entry_det, A.exit_det, A.switches) == (B.entry_det, B.exit_det, B.switches)
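
A possible reading of this rule, as a Rust sketch; the types are simplified, and the activation_time is assumed to be provided by whatever component knows how long the switches take to move:

// Illustrative sketch of the rule above, with simplified types.
use std::collections::BTreeMap;

type Time = f64;
type DirDetectorId = u32;
type SwitchConfig = BTreeMap<String, String>; // switch id -> requested position

struct RouteZoneRequirement {
    entry_det: DirDetectorId,
    exit_det: DirDetectorId,
    switches: SwitchConfig,
    set_deadline: Time,
    release_time: Time,
}

// `activation_time` is the delay needed to reconfigure the zone from
// a.switches to b.switches; here it is assumed to be given by the caller.
fn compatible(a: &RouteZoneRequirement, b: &RouteZoneRequirement, activation_time: Time) -> bool {
    // order the requirements so that a.set_deadline <= b.set_deadline
    let (a, b) = if a.set_deadline <= b.set_deadline { (a, b) } else { (b, a) };
    // either their active time spans do not overlap...
    let disjoint = a.release_time <= b.set_deadline - activation_time;
    // ...or the zone is traversed in the exact same configuration
    let same_config =
        a.entry_det == b.entry_det && a.exit_det == b.exit_det && a.switches == b.switches;
    disjoint || same_config
}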

Spacing conflicts

Context

Even if interlocking mitigates some of the risks associated with operating trains, a major one is left out: head to tail collisions, caused by insufficient spacing.

This responsibility is handled by signaling, which conveys both interlocking and spacing constraints.

Signaling helps trains slow down until the end of their movement authority, which is either:

  • behind the tail of the next train
  • at the end of the last route set for this train

Spacing requirements are emitted for zones which, if occupied, would cause a slowdown, as well as for zones occupied by the train.

Generating requirements

struct SpacingRequirement {
    zone: ZoneId,
    begin_time: Time,
    end_time: Time,
}

Every time the driver sees a signal, generate updated spacing requirements by calculating which zones, if occupied, would trigger a slowdown:

  • start by assuming the zone just after the head of the train is occupied
  • move the assumed occupied zone one zone further away from the train, until it no longer slows the train down
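
A hedged sketch of this scan, assuming a hypothetical would_slow_down predicate and a list of zones ordered from the head of the train outwards:

// Illustrative sketch: `zones_ahead` lists the zones in front of the train
// head, nearest first; `would_slow_down` is a hypothetical predicate that
// answers "would the train be slowed down if this zone were occupied?".
fn zones_requiring_spacing<Z: Copy>(
    zones_ahead: &[Z],
    would_slow_down: impl Fn(Z) -> bool,
) -> Vec<Z> {
    let mut required = Vec::new();
    for &zone in zones_ahead {
        if would_slow_down(zone) {
            // a hypothetical occupancy here would slow the train down:
            // the zone must stay free, so a spacing requirement is emitted
            required.push(zone);
        } else {
            // far enough from the train: stop scanning
            break;
        }
    }
    required
}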

Requirement compatibility rules

Requirement compatibility is evaluated for all SpacingRequirements, grouped by zone.

Requirements A and B are compatible if and only if their [begin_time, end_time] ranges do not overlap.
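
In code form, this rule is a plain interval-overlap test (sketch with simplified types, not the actual implementation):

// Illustrative sketch with simplified types.
type Time = f64;

struct SpacingRequirement {
    begin_time: Time,
    end_time: Time,
}

// Two spacing requirements on the same zone are compatible if and only if
// their [begin_time, end_time] ranges do not overlap.
fn compatible(a: &SpacingRequirement, b: &SpacingRequirement) -> bool {
    a.end_time <= b.begin_time || b.end_time <= a.begin_time
}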

Incremental requirement generation

Routing requirements

sequenceDiagram
    participant client as Client
    participant gen as Routing resource generator
    client ->> gen: initial path + train movement
    loop
        gen ->> client: prefix path extension needed
        client ->> gen: extra prefix path + train movement
    end
    gen ->> client: resource requirements

After an initial path is given, the requirement generator can ask for more prefix path (before the start of the route). The client responds with:

  • the extra prefix path
  • the movement of the train over time on the given prefix path

If the initial path has multiple routes, the last route is the one resource requirements are emitted for.

Spacing requirements

sequenceDiagram
    participant client as Client
    participant gen as Spacing resource generator
    client ->> gen: initial path + train movement
    loop
        gen ->> client: postfix path extension needed
        client ->> gen: extra postfix path
    end
    gen ->> client: resource requirements

After an initial path is given, the requirement generator can ask for more postfix path (after the end of the path).

Visualizing requirements

Full-page requirements diagram

4.2.3 - Search for last-minute train slots (STDCM)

OSRD can be used to find a slot for a train in an already established timetable, without causing conflicts with other trains.

The acronym STDCM (Short Term Digital Capacity Management) is used to describe this concept in general.

4.2.3.1 - Business context

Some definitions:

Capacity

Capacity, in this context, is the ability to reserve infrastructure elements to allow the passage of a train.

Capacity is expressed in both space and time: the reservation of an element can block a specific zone that becomes inaccessible to other trains, and this reservation lasts for a given time interval.

It can be displayed on a chart, with the time on the horizontal axis and the distance traveled on the vertical axis.

Space-time chart

Example of a space-time chart displaying the passage of a train.

The colors here represent aspects of the signals, but display a consumption of the capacity as well: when these blocks overlap for two trains, they conflict.

There is a conflict between two trains when they reserve the same object at the same time, in incompatible configurations.

Space-time chart with conflict

Example of a space-time graph with a conflict: the second train is faster than the first one, they are in conflict at the end of the path, when the rectangles overlap.

When simulating this timetable, the second train would be slowed down by the yellow signals, caused by the presence of the first train.

Train slots

A train slot corresponds to a capacity reservation for the passage of a train. It is fixed in space and time: the departure time and the path taken are known. On the space-time charts in this page, a train slot corresponds to the set of blocks displayed for a train.

Note: in English-speaking countries, these are often simply called “train paths”. But in this context, this name would be ambiguous with the physical path taken by the train.

The usual procedure is for the infrastructure manager (e.g. SNCF Réseau) to offer train slots for sale to railway companies (e.g. SNCF Voyageurs).

At a given date before the scheduled day of operation, all the train paths are allocated. But there may be enough capacity to fit more trains. Trains can fit between scheduled slots, when they are sufficiently far apart or have not found a buyer.

The remaining capacity after the allocation of train paths is called residual capacity. This section explains how OSRD looks for train slots in this residual capacity.

4.2.3.2 - Train slot search module

This module handles the search for solutions.

To reduce the problem to its simplest form and for easy and efficient testing, inputs and outputs are strongly simplified and abstracted.

To summarize its behavior: the solution space is described as a graph that encodes locations, time, and speed. A pathfinding is run on this graph to find a solution.

This graph could, in a way, be seen as a decision tree, but different paths can lead to the same node.

4.2.3.2.1 - Input format

This module takes several parameters to find a path:

  • A graph describing the physical infrastructure
  • Unavailable sections in time intervals
  • Origin and destination point(s)
  • Departure time interval
  • Maximum run time
  • Simulation parameters (rolling stock, time step, allowances, …)

Among those, the first 3 require more explanations.

Infrastructure graph

Today, the input graph is the SignalingRoutes graph. But it can be any graph that represents the physical infrastructures and the paths that can be used.

The only constraints are: the edges must have a length, and it must be possible to compute running time on parts of an edge.

Unavailable sections

This input encodes the areas that are unavailable because of capacity constraints.

Every edge has a set of “occupancy blocks”. A block is made of these elements:

  • Start offset
  • End offset
  • Start time
  • End time

Offsets are relative to the start of the edge. Each block means that the head of the train cannot be located in the edge segment during the given interval.

These blocks include the grid margin. If the solution needs to have an x seconds margin before the train passage, every block ends x seconds later.
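
As an illustration, such a block could be represented as follows (names and units are assumptions, not the module's actual types):

// Illustrative sketch: one unavailable segment of an edge.
// Offsets are relative to the start of the edge; the block means the head
// of the train cannot be within [start_offset, end_offset] during
// [start_time, end_time] (grid margins already included).
struct OccupancyBlock {
    start_offset: f64, // meters from the start of the edge
    end_offset: f64,   // meters from the start of the edge
    start_time: f64,   // seconds
    end_time: f64,     // seconds
}

impl OccupancyBlock {
    // Is the given head position unavailable at the given time?
    fn blocks(&self, offset: f64, time: f64) -> bool {
        (self.start_offset..=self.end_offset).contains(&offset)
            && (self.start_time..=self.end_time).contains(&time)
    }
}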

To give an example, with the following schedule, a 42m long train, and 10m sight distance:

Unavailable section example

  • The occupancy of block 1 from t=0 to t=300 makes it unavailable in its entirety during this time
  • The last 10 meters of block 1 are unavailable from t=300 to t=360, because the signal at the start of block 2 must be green when the driver sees it. It is possible to consider that this unavailability block starts at t=130 (when the next signal isn’t green), as blocks can overlap.
  • The occupancy of block 2 from t=130 to t=360 makes it unavailable during this time. It is also unavailable from t=0, as the presence of a train in this block would cause a warning on block 1.
  • The first 42 meters of block 3 are unavailable from t=0 to t=360, because the tail of the train must have left the block 2 at this time.
  • The rest of block 3 is unavailable in its entirety from t=280 to t=360

4.2.3.2.2 - Encoding the solution space

General principle

The problem is still a pathfinding problem in a given graph. Once the problem is encoded as a graph search, it is possible to reuse our existing tools for this purpose.

We consider the product graph of position, time, and speed. This means that every graph element contains these 3 variables (among other things).

Every graph edge is computed using running-time calculation to get speed and positions as functions of time.

Graphical representation

Space is encoded with a graph that contains the physical infrastructure.

product graph (1/3)

It is then “duplicated” at different times.

product graph (2/3)

The nodes are then linked together in a way that reflects travel time.

product graph (3/3)

Notes

  • The graph is constructed on the fly as it is explored.
  • It is discretized in time, to evaluate which nodes have already been visited. We keep full accuracy of time values, but two nodes at the same place and close times are considered identical.
  • Every edge is computed with a running time computation.
  • Speed isn’t discretized or considered to check visited nodes, it’s only used to compute time.
  • By default, the train always goes as fast as it can (while still following standard allowances). It only slows down when necessary.
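
As an illustration of the second note, the visited-node check could be keyed on a discretized time while exact time values are kept on the nodes themselves; the time step and types below are assumptions:

// Illustrative sketch: visited nodes are keyed by (location, discretized
// time). Exact time values are kept on the nodes themselves; only the key
// used for the "already visited?" test is discretized. Speed is
// deliberately not part of the key.
use std::collections::HashSet;

const TIME_STEP: f64 = 60.0; // seconds; arbitrary value for illustration

#[derive(Hash, PartialEq, Eq)]
struct NodeKey {
    location: u64,    // identifier of the node in the physical graph
    time_bucket: i64, // discretized time
}

fn key(location: u64, time: f64) -> NodeKey {
    NodeKey { location, time_bucket: (time / TIME_STEP).floor() as i64 }
}

fn main() {
    let mut visited: HashSet<NodeKey> = HashSet::new();
    // two nodes at the same place and close times map to the same key
    assert!(visited.insert(key(42, 130.0)));
    assert!(!visited.insert(key(42, 150.0)));
}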

Example

For example, with the following infrastructure, using the track graph: Example infra

Exploring the solution graph can give the following result: Graph representation

4.2.3.2.3 - Discontinuities and backtracking

The discontinuity problem

When a new graph edge is visited, a simulation is run to evaluate its speed. But it is not possible to see beyond the current edge. This makes it difficult to compute braking curves, because they can span over several edges.

Discontinuity

This example illustrates the problem: by default the first edge is explored by going at maximum speed. The destination is only visible once the second edge is visited, which doesn’t leave enough distance to stop.

Solution : backtracking

To solve this problem, when an edge is generated with a discontinuity in the speed envelopes, the algorithm goes back over the previous edges to create new ones that include the decelerations.

To give a simplified example, on a path of 4 edges where the train can accelerate or decelerate by 10km/h per edge:

Discontinuity (edge version, 1/2)

For the train to stop at the end of route 4, it must be at most at 10km/h at the end of edge 3. A new edge is then created on edge 3, which ends at 10km/h. A deceleration is computed backwards from the end of the edge back to the start, until the original curve is met (or the start of the edge).

In this example, the discontinuity has only been moved to the transition between edges 2 and 3. The process is then repeated on edge 2, which gives the following result:

Discontinuity (edge version, 2/2)

Old edges are still present in the graph as they can lead to other solutions.

4.2.3.2.4 - Conflict avoidance

While exploring the graph, it is possible to end up in locations that would generate conflicts. They can be avoided by adding delay.

Shifting the departure time

The departure time is defined as an interval in the module parameters: the train can leave at a given time, or up to x seconds later. Whenever possible, delay should be added by shifting the departure time.

For example: a train can leave between 10:00 and 11:00. Leaving at 10:00 would cause a conflict: the train needs to enter the destination station 15 minutes later. Making the train leave at 10:15 solves the problem.

In OSRD, this feature is handled by keeping track, for every edge, of the maximum duration by which we can delay the departure time. As long as this value is enough, conflicts are avoided this way.

This time shift is a value stored in every edge of the path. Once a path is found, the value is summed over the whole path. This is added to the departure time.

For example :

  • a train leaves between 10:00 and 11:00. The initial maximum time shift is 1:00.
  • At some point, an edge becomes unavailable 20 minutes after the train passes. The maximum time shift is now 20 minutes for any edge accessed from here.
  • The departure time is then delayed by 5 minutes to avoid a conflict. The maximum time shift value is now at 15 minutes.
  • This process is applied until the destination is found, or until no more delay can be added this way.
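
A sketch of this bookkeeping, with simplified types (the real edges carry many more attributes):

// Illustrative sketch: each edge remembers how much the departure time can
// still be shifted, and how much shift it introduced itself.
struct EdgeDelay {
    max_remaining_shift: f64, // seconds of departure shift still available
    added_shift: f64,         // seconds of shift introduced on this edge
}

// Try to avoid a conflict by shifting the departure time. Returns the new
// edge state, or None if not enough shift is left (in which case delay has
// to be added with an engineering allowance instead).
fn shift_departure(prev: &EdgeDelay, needed: f64) -> Option<EdgeDelay> {
    if needed <= prev.max_remaining_shift {
        Some(EdgeDelay {
            max_remaining_shift: prev.max_remaining_shift - needed,
            added_shift: needed,
        })
    } else {
        None
    }
}

// Once a path is found, the shifts are summed over its edges and added to
// the departure time.
fn total_shift(path: &[EdgeDelay]) -> f64 {
    path.iter().map(|e| e.added_shift).sum()
}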

Engineering allowances

Once the maximum delay is at 0, the delay needs to be added between two points of the path.

Engineering allowances (1/2)

The idea is the same as the one used to fix speed discontinuities: new edges are created, replacing the previous ones. The new edges have an engineering allowance, to add the delay where it is possible.

Engineering allowances (2/2)

Computing an engineering allowance is a feature of the running-time calculation module. It adds a given delay between two points of a path, without affecting the speeds on the rest of the path.

4.2.3.2.5 - Standard allowance

The STDCM module must be usable with standard allowances. The user can set an allowance value, expressed either as a function of the running time or of the travelled distance. This extra time must be added to the running time, so that the train arrives later than it would at its fastest possible running time.

For example: the user can set a margin of 5 minutes per 100 km. On a 42 km long path that would take 10 minutes at best, the allowance adds 42 × 5 / 100 = 2.1 minutes, so the train should arrive 12 minutes and 6 seconds after leaving.

This can make conflict detection harder, as an allowance moves the end of the train slot to a later time. The allowance must therefore be taken into account when computing conflicts as the graph is explored.

The allowance must also follow the MARECO model: the extra time isn’t added evenly over the whole path, it is computed in a way that requires knowing the whole path. This is done to optimize the energy used by the train.

Linear margin expressed as a function of time

As a first step, the problem is solved with a linear margin, i.e. added evenly over the whole path. The speed is simply modified by a constant factor.

The envelopes computed during the graph traversal are not modified: they are always at maximum speed. But they are paired with a speed factor, which is used to compute running times and to evaluate conflicts.

The final envelope, with the allowance, is only computed once a path is found.
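
A sketch of this pairing, assuming an allowance expressed as a percentage of the running time:

// Illustrative sketch: a maximum-speed envelope paired with a constant
// speed factor. A 5% time allowance divides speeds by 1.05, which
// multiplies travel times by 1.05.
struct TimedEnvelope {
    base_travel_time: f64, // seconds, at maximum speed
    speed_factor: f64,     // < 1.0 when an allowance is applied
}

impl TimedEnvelope {
    // allowance_ratio = 0.05 for "5% of the running time"
    fn with_time_allowance(base_travel_time: f64, allowance_ratio: f64) -> Self {
        TimedEnvelope {
            base_travel_time,
            speed_factor: 1.0 / (1.0 + allowance_ratio),
        }
    }

    // Travel time used to evaluate conflicts during the graph exploration.
    fn travel_time(&self) -> f64 {
        self.base_travel_time / self.speed_factor
    }
}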

Linear margin expressed as a function of distance

The principle is generally the same, but with an extra difficulty: the speed factor isn’t constant over the path. When a train goes faster, it travels more distance in the same time, which increases the allowance time and the speed factor.

Because the train speed changes over the path, the speed factor changes from one edge to another. This causes irregular speed curves.

MARECO Allowances

This is exclusively a post-processing step, because it isn’t possible to compute the MARECO envelope without knowing the full train path. When looking for a path, linear allowances are used.

This means that conflicts may appear at this step. To avoid them, the following procedure is applied:

  1. A MARECO allowance is applied over the whole path.
  2. If there are conflicts, the first one is considered.
  3. The MARECO allowance is split into two intervals. The point where the first conflict appeared is constrained to be passed at the same time as in the envelope with a linear allowance, removing the conflict at this point.
  4. This process is repeated iteratively until no conflict is found.

4.2.3.2.6 - Implementation details

This page is about implementation details. It isn’t necessary to understand general principles, but it helps before reading the code.

STDCMEdgeBuilder

This refers to this class in the project.

This class is used to make it easier to create instances of STDCMEdge, the graph edges. Those contain many attributes, most of which can be determined from the context (e.g. the previous node). The STDCMEdgeBuilder class makes some parameters optional and automatically computes others.

Once instantiated and parametrized, an STDCMEdgeBuilder has two methods:

  • Collection<STDCMEdge> makeAllEdges() can be used to create all the possible edges in the given context for a given route. If there are several “openings” between occupancy blocks, one edge is instantiated for each opening. Conflicts, their avoidance, and the related attributes are handled here.

  • STDCMEdge findEdgeSameNextOccupancy(double timeNextOccupancy): This method is used to get the specific edge that uses a certain opening (when it exists), identified here with the time of the next occupancy block. It is called whenever a new edge must be re-created to replace an old one. It calls the previous method.

Pathfinding

The methods mentioned here are defined in this class.

Cost function

The function used to define the pathfinding cost determines which path is preferred over another. The result is always the one that minimizes this cost (as long as the heuristic is admissible).

Here, two parameters are used: total run time and departure time. The latter has a very small weight compared to the former, so that the fastest path is found. More details are explained in the documentation of those methods.
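
A sketch of such a cost function; the departure-time weight below is an arbitrary illustration, not the project's actual constant:

// Illustrative sketch: the cost is dominated by the total run time, and the
// departure time only acts as a tie-breaker between equally fast paths.
fn cost(total_run_time: f64, departure_time: f64) -> f64 {
    const DEPARTURE_TIME_WEIGHT: f64 = 1e-5; // arbitrary small weight
    total_run_time + DEPARTURE_TIME_WEIGHT * departure_time
}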

Heuristics

The algorithm used to find a path is an A*, with a heuristic based on geographical coordinates.

However, the coordinates of generated infrastructures are arbitrary and don’t reflect the track distance. It means that, for the generated infrastructures, the path may not always be the shortest one.

It would be possible to use this heuristic to determine whether the current node can lead to a path that doesn’t take longer than the maximum allowed total run time. But for the same reason, adding this feature would break any STDCM test on generated infras. More details in this issue.

4.2.3.3 - Signaling interface

WIP

There’s a draft of what we intend to do on the French page, but it’s still a work in progress. The implementation hasn’t been started.

4.2.4 - Timetable v2

Describes evolutions to the new timetable and train schedule models

Design decisions

Some major changes were made between our first version of the timetable and the new one:

  • Isolate the timetable table. It can be used in a scenario or in other contexts
  • Have a soft reference from train schedule to rolling stock (to be able to create a train schedule with unknown rolling stock)
  • Consider path and simulation outputs as caches (they don't need to be stored in the database)
  • We can compute pathfinding without having to store data
  • All input needed to compute a path is stored in the train schedule (we can recompute it if needed)
  • All input needed to run a simulation is stored in the train schedule (we can recompute it if needed)

Train schedule v2

Requirements

  • front: easy to keep consistent during edition
  • front: intermediate invalid states that can be reached during edition have to be encodable
  • front: when deleting a waypoint that is referenced by margins, the position of the deleted waypoint within the path must be preserved until the situation is resolved
  • import: path waypoint locations can be specified using UIC operational point codes
  • import: support fixed scheduled arrival times at stops and arbitrary points
  • import edition: train schedules must be self-contained: they cannot be described using the result of pathfinding or simulations

Design decisions

Path waypoints have an identity

At some point in the design process, the question was raised of whether to reference the locations of stops and margin transitions by name or by value. That is, should stops hold the index of the waypoint where the stop occurs, or a description of the location where the stop occurs?

It was decided to add identifiers to path waypoints, and to reference those identifiers where referencing a path location is needed. This has multiple upsides:

  • you can’t reference a location outside of the path
  • when changing a waypoint’s location, for example from one station’s platform to another, no additional maintenance work is needed to keep the path consistent
  • if a path goes to the same place multiple times, the identifier reference makes it clear which path location is referenced
  • it makes keeping data consistent while editing easier, as all locations are kept in a single place

Invalid train schedules and soft deletes

If a user deletes a waypoint, what happens? Is it the front-end’s responsibility to only save valid schedules, or can invalid schedules be represented in the data model? We decided that it wasn’t just the front-end’s responsibility, as we want to be able to model inconsistent states, until the user comes back to fix them.

One key observation was that we don’t want to lose the ability to locate deleted waypoints within the path, until all references to them are gone. How is the front-end supposed to display margin bounds or stops for a waypoint that’s gone, if it’s not there anymore?

We thus decided to add a deleted soft-delete flag to waypoints. When this flag is set, the back-end does not run simulations on the path, but still allows saving it. Once all references to a deleted waypoint are gone, it can be removed from the path. The back-end can deny train schedules with stale deleted waypoints.

Separating path and stops

This decision was hard to make, as there are few factors influencing it. Two observations led us to this decision:

  • when deleting a waypoint, the end user may want to preserve the associated stop. Making the separation clear in the data model helps with implementing this behavior correctly, if deemed relevant
  • bundling stops into the path makes it harder to describe what fields path waypoints should have, and what should have a separate object and reference. It was decided that keeping path a simple list of Location, with no strings attached, made things a little clearer.

No more engineering margins?

In the legacy model, we had engineering margins. These margins had the property of being able to overlap. It was also possible to choose the distribution algorithm for each margin individually.

We asked users to comment on the difference and the usefulness of retaining these margins alongside scheduled points. The answer is that there is no fundamental difference, and that the additional flexibility offered by engineering margins makes no practical sense (overlap and choice of distribution…).

Arrival times are durations since departure time

  • this allows shifting the departure time without having to change arrival times
  • we don’t have to parse dates and compute date differences within a single trip

We also discussed whether to use seconds or ISO 8601 durations. In the end, ISO 8601 was chosen, despite the simplicity of seconds:

  • it preserves the user’s choice of unit for specifying durations
  • it interfaces nicely with the ISO 8601 departure time
  • it does not suffer from potential integer-float serialization related precision loss

Invalid and outdated train schedules

Reasons for a train schedule to be invalid:

  • Inconsistent train schedule (contains a deleted waypoint)
  • Rolling stock not found
  • Path waypoint not found
  • The path cannot be found

Reasons for a train schedule to be outdated:

  • The train path changed
  • The train running time changed

What we can do about outdated trains:

  1. Nothing, they’re updated without notification
  2. We can notify the user that a train schedule is outdated:
    • Nothing can be done except acknowledge the change
    • We cannot check what changed between the old and new versions
    • We cannot know the cause of this change (RS, Infra, Algorithms…)

Note: The outdated status is a nice to have feature (it won’t be implemented right now).

Creation fields

These fields are required at creation time, but cannot be changed afterwards. They are returned when the train schedule is queried.

timetable_id: 42

Modifiable fields

train_name: "ABC3615"
rolling_stock_name: R2D2

# labels are metadata. They're only used for display filtering
labels: ["tchou-tchou", "choo-choo"]

# used to select speed limits for simulation
speed_limit_tag: "MA100"

# the start time is an ISO 8601 datetime with timezone. it is not always the
# same as the departure time, as there may be a stop at the starting point
start_time: "2023-12-21T08:51:11.914897+00:00"

path:
 - {id: a, uic: 87210} # Any operational point matching the given uic
 - {id: b, track: foo, offset: 1000} # 10m on track foo
 - {id: c, deleted: true, trigram: ABC} # Any operational point matching the trigram ABC
 - {id: d, operational_point: X} # A specified operational point

# the algorithm used for distributing margins and scheduled times
constraint_distribution: MARECO # or LINEAR

# all durations and times are specified using ISO 8601
# we don't support durations in months and years since they are ambiguous
# times are defined as time elapsed since start. Even if the attribute is omitted,
# a scheduled point at the starting point is inferred to have departure=start_time
# the "locked" flag is ignored by the backend.
schedule:
 - {at: a, stop_for: PT5M, locked: true} # arrival inferred to be equal to start_time
 - {at: b, arrival: PT10M, stop_for: PT5M}
 - {at: c, stop_for: PT5M}
 - {at: d, arrival: PT50M, locked: true}

margins:
  # This example encodes the following margins:
  #   a --- 5% --- b --- 3% --- d

  # /!\ all schedule points with either an arrival or departure time must also be
  # margin boundaries. departure and arrival waypoints are implicit boundaries. /!\
  # boundaries delimit margin sections. A list of N boundaries yields N + 1 sections.
  boundaries: [b]

  # the following units are supported:
  #  - none is a special value which indicates no margin. It's the default
  #  - % means added percentage of the base simulation time
  #  - min/km means minutes per kilometer
  values: ["5%", "3%"]

# train speed at simulation start, in meters per second.
# must be zero if the train starts at a stop
initial_speed: 2.5

power_restrictions:
 - {from: b, to: c, value: "M1C1"}

comfort: AIR_CONDITIONING # or HEATING, default STANDARD

options:
  # Should we use electrical profiles to select rolling stock speed effort curves
  use_electrical_profiles: true

Combining margins and schedule

Margins and scheduled points are two ways to add time constraints to a train’s schedule. Therefore, there must be a clear set of rules to figure out how these two interfaces interact.

The end goal is to make the target schedule and margins consistent with each other. This is achieved by:

  • computing what the schedule would look like if only margins were applied
  • comparing that to the target schedule
  • correcting the margin schedule so that it matches the target schedule

The path is partitioned as follows:

  • known time sections span between locations where the arrival time is known. If there are N such locations, there are N - 1 known time sections. In these sections, margins need to be adjusted to match the target schedule.
  • If the arrival time at destination is unknown, the section from the last point with a known arrival time to the destination is called the relaxed time section. It has no bound, so margins can be applied directly.

As margins cannot span known time section boundaries, each known time section can be further subdivided into margin sections. Margins cover the entire path.

The end goal is to find the target arrival time at the end of each margin section. This needs to be done while preserving consistency with the input schedule.

Schedule building algorithm

The final schedule is computed as follows:

  • A base simulation is computed, without any time constraint (other than stops). It’s used to compute provisional margin values.
  • Make a provisional time table, which ignores target arrival times but includes provisional margin values.
  • For each known time section, compute the adjustment required to make the provisional schedule match the target schedule.
  • Distribute this difference into the known time section’s margin sections, proportionally to margin section running time. After distributing the adjustment into margin sections, the final schedule should be compatible with the target schedule.
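
The last step could be sketched as follows (illustrative only): the adjustment of a known time section is split between its margin sections proportionally to their provisional running times:

// Illustrative sketch: split a known time section's adjustment between its
// margin sections, proportionally to their provisional running times.
fn distribute_adjustment(section_running_times: &[f64], adjustment: f64) -> Vec<f64> {
    let total: f64 = section_running_times.iter().sum();
    section_running_times
        .iter()
        .map(|rt| adjustment * rt / total)
        .collect()
}

fn main() {
    // a known time section made of two margin sections running 300 s and
    // 600 s, which must absorb 90 extra seconds: they get 30 s and 60 s
    assert_eq!(distribute_adjustment(&[300.0, 600.0], 90.0), vec![30.0, 60.0]);
}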

Error handling

Some errors may happen while building the timetable:

  • if a known time section’s required adjustment is negative, a warning must be raised, as margins will have to be lowered
  • if a margin section’s final running time is tighter than the base simulation, it cannot be achieved, and a warning should be raised

Other errors can happen at runtime:

  • target margin values can be too low, as transitions from a high-density margin section to a low-density one force the train to lose time after it has exited the high-density margin section.
  • target margin values can also be too high, as the train may not have time to slow down enough, or would have to drive so slowly as to be unacceptable.

During simulation, if a target arrival time cannot be achieved, the rest of the schedule still stands.

Endpoints

Timetable

POST /v2/timetable
GET /v2/timetable/ # Paginated list of timetables
PUT /v2/timetable/ID
DELETE /v2/timetable/ID
GET /v2/timetable/ID # Timetable with list of train schedule ids attached to it

Train Schedule

POST /v2/train_schedule # A batch creation
GET /v2/train_schedule/ID
PUT /v2/train_schedule/ID # Update a specific train schedule
DELETE /v2/train_schedule # A batch deletion

Path

POST /v2/infra/ID/pathfinding/topo # Not required now, can be moved later
POST /v2/infra/ID/pathfinding/blocks
# takes a pathfinding result and a list of properties to extract
POST /v2/infra/ID/path_properties?props[]=slopes&props[]=gradients&props[]=electrifications&props[]=geometry&props[]=operational_points
GET /v2/train_schedule/ID/path?infra_id=42 # Retrieve the path from a train schedule

Simulation results

# Retrieve the list of conflicts of the timetable (invalid trains are ignored)
GET /v2/timetable/ID/conflicts?infra=N
# Retrieve the space, speed and time curve of a given train
GET /v2/train_schedule/ID/simulation?infra=N
# Retrieves simulation information for a given train list. Useful for finding out whether pathfinding/simulation was successful.
GET /v2/train_schedule/simulation_summary?infra=N&ids[]=X&ids[]=Y
# Projects the space time curves and paths of a number of train schedules onto a given path
POST /v2/train_schedule/project_path?infra=N&ids[]=X&ids[]=Y

4.3 - APIs

Programming interfaces specifications

4.3.1 - JSON Schemas

Infrastructure

Rolling Stock

5 - Railway Wiki

International railway wiki

This wiki is meant to help software engineers have a deep understanding of railway systems.

It can only happen if content is added as needed. If something is missing, contribute!

5.1 - Glossary

Glossary of OSRD and railway vocabulary

Please open an issue if you’re missing a word