How-to guides are recipes. They guide you through the steps involved in addressing key problems and use-cases. They are more advanced than tutorials and assume some knowledge of how OSRD works.
How-to Guides
- 1: Contribute to OSRD
- 1.1: Preamble
- 1.2: License and set-up
- 1.3: Contribute code
- 1.3.1: General principles
- 1.3.2: Back-end conventions
- 1.3.3: Front-end conventions
- 1.3.4: Write code
- 1.3.5: Commit conventions
- 1.3.6: Share your changes
- 1.3.7: Tests
- 1.4: Review process
- 1.5: Report issues
- 1.6: Install docker
- 2: Deploy OSRD
- 3: Logo
- 4: OSRD's design
- 5: Release
1 - Contribute to OSRD
1.1 - Preamble
First off, thanks for taking the time to contribute!
The following chapters are a set of guidelines for contributing to OSRD. These guidelines are mostly not strict rules; it’s probably fine to do things slightly differently. If you have already contributed to open source projects before, you probably won’t be surprised. If you have not, they will probably help a lot!
Communicate
Chatting with other contributors is a great way to speed things up:
- Create an issue to discuss your contribution project.
Inquire
Just like with any project, changes rely on past work. Before making changes, it is best to learn about what’s already there:
- read technical documentation
- read the existing source code related to your project
- chat with developers who last worked on areas you are interested in
1.2 - License and set-up
License of code contributions
The source code of OSRD is available under the LGPLv3 license. By contributing to the codebase, you consent to the distribution of your changes under the project’s license.
LGPLv3 forbids modifying source code without sharing the changes under the same license: use other people’s work, and share yours!
This constraint does not propagate through APIs: You can use OSRD as a library, framework or API server to interface with proprietary software. Please suggest changes if you need new interfaces.
Set things up
Get the source code
- Install git
- Open a terminal in the folder where the source code of OSRD will be located
- Run
git clone https://github.com/OpenRailAssociation/osrd.git
Launch the application
Docker is a tool which greatly reduces the amount of setup required to work on OSRD:
- download the latest development build:
docker compose pull
- start OSRD:
docker compose up
- build and start OSRD:
docker compose up --build
- review a PR using CI built images:
TAG=pr-XXXXX docker compose up --no-build --pull always
To get started:
- Install docker
- Follow OSRD’s README.
1.3 - Contribute code
This chapter is about the process of integrating changes into the common code base. If you need help at any stage, open an issue or message us.
The OSRD application is split into multiple services written in several languages. We try to follow general code best practices and each language’s specific conventions when required.
1.3.1 - General principles
- Explain what you’re doing and why.
- Document new code with doc comments.
- Include clear, simple tests.
- Break work into digestible chunks.
- Take the time to pick good names.
- Avoid abbreviations that are not well-known.
- Favor control and consistency over third-party code reuse: only add a dependency if it is absolutely necessary.
  - Every dependency we add decreases our autonomy and consistency.
  - We try to keep the number of dependency-bump PRs low each week in each component, so grouping dependency bumps in a batch PR is a valid option (see the component’s README.md).
- Don’t reinvent every wheel: as a counterpoint to the previous point, don’t reinvent everything at all costs.
  - If there is a dependency in the ecosystem that is the “de facto” standard, we should heavily consider using it.
- More code general recommendations in main repository CONTRIBUTING.md.
- Ask for any help that you need!
Consult back-end conventions ‣
1.3.2 - Back-end conventions
Python
Python code is used for some packages and integration testing.
- Follow the Zen of Python.
- Code is linted with flake8.
- Code is formatted with Black.
- Imports are sorted with Isort.
- Python tests are written using pytest.
- Typing is checked using pytype.
Rust
- As a reference for our API development we are using the Rust API guidelines. Generally, these should be followed.
- Prefer granular imports over glob imports like diesel::*.
- Tests are written with the built-in testing framework.
- Use the documentation example to know how to phrase and format your documentation.
- Use consistent comment style:
  - /// doc comments belong above #[derive(Trait)] invocations.
  - // comments should generally go above the line in question, rather than in-line.
  - Start comments with capital letters. End them with a period if they are sentence-like.
- Use comments to organize long and complex stretches of code that can’t sensibly be refactored into separate functions.
- Code is linted with clippy.
- Code is formatted with rustfmt.
Java
- Code style is enforced with checkstyle.
1.3.3 - Front-end conventions
We use ReactJS and all files must be written in Typescript.
The code is linted with eslint, and formatted with prettier.
Nomenclature
The applications (osrd eex, osrd stdcm, infra editor, rolling-stock editor) offer views (project management, study management, etc.) linked to modules (project, study, etc.) which contain the components.
These views are made up of components and sub-components all derived from the modules. In addition to containing the views files for the applications, they may also contain a scripts directory which offers scripts related to these views. The views determine the logic and access to the store.
Modules are collections of components attached to an object (a scenario, a rolling stock, a TrainSchedule). They contain:
- a components directory hosting all components
- an optional styles directory per module for styling components in scss
- an optional assets directory per module (which contains assets, e.g. default datasets, specific to the module)
- an optional reducers file per module
- an optional types file per module
- an optional consts file per module
An assets directory (containing images and other files).
Last but not least, a common directory offering:
- a utils directory for utility functions common to the entire project
- a types file for types common to the entire project
- a consts file for constants common to the entire project
Implementation principles
Routing & SLUG
In progress:
projects/{project name}/studies/{study name}/scenarios/{scenario name}
Styles & SCSS
WARNING: in CSS/React, the scope of a class does not depend on where the file is imported, but is valid for the entire application. If you import an scss file in the depths of a component (which we strongly advise against), its classes will be available to the whole application and may therefore cause side effects.
It is therefore highly recommended to be able to easily follow the tree structure of applications, views, modules and components also within the SCSS code, and in particular to nest class names to avoid edge effects, as the compiler will take care of making the necessary hierarchy.
If, for example, we have a rollingStockSelector component which proposes a list of rolling stock rollingStockList, represented by rollingStockCard containing an image of the rolling stock rollingStockImg, we should have the following SCSS structure:
.rollingStockSelector {
.rollingStockList {
.rollingStockCard {
.rollingStockImg {
width: 5rem;
height: auto;
}
}
}
}
This ensures that the image contained in the rolling stock card inherits the correct CSS properties: .rollingStockSelector .rollingStockList .rollingStockCard .rollingStockImg.
CSS Modules
CSS modules allow scoping CSS styles to a specific component, thereby avoiding conflicts with global class names.
Vite natively supports CSS modules. Ensure that your CSS file has the .module.css extension, for example styles.module.css.
Using CSS Modules in Components
- Create an SCSS file with the .module.scss extension:
/* MyComponent.module.scss */
.container {
background-color: white;
}
.title {
font-size: 24px;
color: #333;
}
- Use the classes in your React component:
Vite transforms the classes into objects containing hashed class names (e.g., _container_h3d8bg) and uses them during bundle generation, making the classes unique.
import React from "react";
import styles from "./MyComponent.module.scss";
export function MyComponent() {
return (
<div className={styles.container}>
<h1 className={styles["title"]}>My Title</h1>
</div>
);
}
For more information, you can refer to the Vite.js documentation.
Class names, using cx()
Classes are normally added one after the other, in the className="" property.
However, when necessary (class usage tests, concatenation, etc.), we use the classnames library, which recommends the following usage:
<div className="rollingStockSelector">
<div className="rollingStockList">
<div className="rollingStockCard w-100 my-2">
<img
className={cx("rollingStockImg", "m-2", "p-1", "bg-white", {
valid: isValid(),
selected: rollingStockID === selectedRollingStockID,
})}
/>
</div>
</div>
</div>
Each class is passed as a separate string, and Boolean or other operations are performed in an object that returns (or not) the property name as the class to be used in CSS.
Store/Redux
All store selectors are managed by the view and passed as props to components and sub-components.
Consequently, read and write calls to the store must be made at view level, irrigating the components proposed by the view with props and states.
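For illustration, here is a minimal sketch of this pattern; the selector, action and component names are hypothetical:
import { useDispatch, useSelector } from "react-redux";

// Hypothetical selector and component, named for illustration only.
import { getSelectedTrainId } from "reducers/simulation/selectors";
import TrainList from "modules/train/components/TrainList";

export default function SimulationView() {
  const dispatch = useDispatch();
  // The view reads the store...
  const selectedTrainId = useSelector(getSelectedTrainId);
  // ...and irrigates its components with props and callbacks.
  return (
    <TrainList
      selectedTrainId={selectedTrainId}
      onSelect={(id: number) => dispatch({ type: "simulation/selectTrain", payload: id })}
    />
  );
}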
RTK
Use the endpoints generated from the openapi.yaml files to consume the backend.
Operation of RTK Query cache
When data is retrieved from the backend, RTK caches it in the store. If the same endpoint is called again with the same parameters, RTK uses the cached data instead of making a new call to the backend.
In the store, you will see the editoastApi key containing the cached data of all editoast endpoints:
Here, for example, the getProjects endpoint is called.
RTK stores the endpoint’s name, as well as the call’s parameters, to form a unique key endpointName({ parameters }) (here getProjects({"ordering":"LastModifiedDesc","pageSize":1000})).
{
  'getProjectsByProjectIdStudiesAndStudyId({"projectId":13,"studyId":16})': {
    status: "fulfilled",
    …
  },
  'getProjectsByProjectIdStudiesAndStudyId({"projectId":13,"studyId":14})': {
    …
  }
}
In this second example, the same endpoint has been called with the same projectId parameter, but a different studyId parameter.
Serialization of keys in the cache
The strings used as keys in the cache are essentially the parameter object passed through the JSON.stringify function, which converts a JS object into a string (thus serialized).
Normally, serialization does not preserve the order of object keys. For example, JSON.stringify will not produce the same string with these two objects: { a: 1, b: 2 } and { b: 2, a: 1 }.
RTK optimizes caching by ensuring that the result of a call with {"projectId":13,"studyId":16} or {"studyId":16,"projectId":13} is stored under the same key in the cache.
To see the detailed operation, here is the code for this serialization function:
RTK Serialization Function
const defaultSerializeQueryArgs: SerializeQueryArgs<any> = ({
endpointName,
queryArgs,
}) => {
let serialized = ''
const cached = cache?.get(queryArgs)
if (typeof cached === 'string') {
serialized = cached
} else {
const stringified = JSON.stringify(queryArgs, (key, value) =>
isPlainObject(value)
? Object.keys(value)
.sort() // keys are reordered here
.reduce<any>((acc, key) => {
acc[key] = (value as any)[key]
return acc
}, {})
: value
)
if (isPlainObject(queryArgs)) {
cache?.set(queryArgs, stringified)
}
serialized = stringified
}
// Sort the object keys before stringifying, to prevent useQuery({ a: 1, b: 2 }) having a different cache key than useQuery({ b: 2, a: 1 })
return `${endpointName}(${serialized})`
}
Data subscription
In RTK Query terminology, when a React component calls an endpoint defined in RTK Query, it subscribes to the data.
RTK counts the number of references to the same pair (endpoint, {parameters}). When two components subscribe to the same data, they share the same key in the cache.
import { osrdEditoastApi } from "./api.ts";
function Component1() {
// component subscribes to the data
const { data } = osrdEditoastApi.useGetXQuery(1);
return <div>...</div>;
}
function Component2() {
// component subscribes to the data
const { data } = osrdEditoastApi.useGetXQuery(2);
return <div>...</div>;
}
function Component3() {
// component subscribes to the data
const { data } = osrdEditoastApi.useGetXQuery(3);
return <div>...</div>;
}
function Component4() {
// component subscribes to the *same* data as ComponentThree,
// as it has the same query parameters
const { data } = osrdEditoastApi.useGetXQuery(3);
return <div>...</div>;
}
Here, Component3 and Component4 will generate only one call to the backend. They subscribe to the same data (same endpoint and same parameter 3). They will share the same key in the cache.
In total, there will be three calls to the backend here, with parameters 1, 2, and 3.
As long as at least one mounted React component calls the osrdEditoastApi.endpoints.getProjectsByProjectId.useQuery hook, for example, the data will be retained in the cache.
Once the last component is unmounted, the data is removed from the cache after 60 seconds (default value).
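If a different retention is needed, RTK Query allows overriding this default per endpoint; a minimal sketch, assuming the generated getProjects endpoint shown earlier:
import { osrdEditoastApi } from "./api.ts";

// Keep getProjects data for 5 minutes (300 s) instead of the default
// 60 s after the last subscribed component unmounts.
osrdEditoastApi.enhanceEndpoints({
  endpoints: {
    getProjects: (definition) => {
      definition.keepUnusedDataFor = 300;
    },
  },
});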
Translation
Application translation is performed on Transifex. The default language is French. If you add a new translation key, it can be added directly to the code, in all available languages. Please note that if you need to correct a translation, we recommend that you use Transifex, to avoid any conflict.
Rules and important elements
No component should be responsible for updating the data it uses
Only views contain the store selectors, which are then given as props to the components of the module linked to the view.
SCSS is not scoped
A .scss file buried in the tree structure doesn’t guarantee that the classes it contains can only be accessed there. Even when imported from a React component (which is formally forbidden: you must use SCSS imports), all declared classes are accessible everywhere.
Prefer a judicious choice of root class name for a given module, and use the tree structure available in the SCSS file.
Imports must follow a specific order
ESLint is set up to automatically sort imports into four groups, each sorted in alphabetical order:
- React
- External libraries
- Internal absolute path files
- Internal relative path files
Each of these groups will be separated by an empty line.
ESLint will trigger a warning if you don’t follow these guidelines.
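For example, a compliant import block might look like this (all module names except osrdEditoastApi are illustrative):
import React from "react";

import { useTranslation } from "react-i18next";

import { osrdEditoastApi } from "common/api/osrdEditoastApi";

import { formatTrainName } from "./utils";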
Import links must be absolute
You must use the full path for all your imports.
Import paths can be relative only if the file to be imported is in the same directory.
TypeScript
import & export
ESLint and TypeScript are set up to enforce typed imports for exported types.
This setup allows:
- Automatically typing the import when using a type in a file, with autocompletion.
- Getting errors from both ESLint and TypeScript asking you to use a type import if you didn’t.
When an import or export contains only types, indicate it with the type keyword.
export type { Direction, DirectionalTrackRange as TrackRange };
import type { typedEntries, ValueOf } from "utils/types";
When an import contains not only types, it is structured as below, in alphabetical order.
import {
osrdEditoastApi,
type ScenarioCreateForm,
} from "common/api/osrdEditoastApi";
This allows you to:
- Improve the performance and analysis process of the compiler and the linter.
- Make these declarations more readable; we can clearly see what we are importing.
- Avoid dependency cycles: the error disappears with the type keyword.
- Make the final bundle lighter (all types disappear at compilation).
1.3.4 - Write code
If you are not used to Git, follow this tutorial.
Create a branch
If you intend to contribute regularly, you can request access to the main repository. Otherwise, create a fork.
Add changes to your branch
Before you start working, try to split your work into macroscopic steps. At the end of each step, save your changes into a commit. Try to make commits of logical and atomic units. Try to follow style conventions.
Keep your branch up-to-date
git switch <your_branch>
git fetch
git rebase origin/dev
1.3.5 - Commit conventions
Commit style
The overall format for git commits is as follows:
component1, component2: imperative description of the change
Detailed or technical description of the change and what motivates it,
if it is not entirely obvious from the title.
- the commit message, just like the code, must be in English (only ASCII characters for the title)
- there can be multiple components separated by : in case of hierarchical relationships, or by , otherwise
- components are lower-case, using -, _ or . if necessary
- the imperative description of the change begins with a lower-case verb
- the title must not contain any link (# is forbidden)
Ideally:
- the title should be self-explanatory: no need to read anything else to understand it
- the commit title is all lower-case
- the title is clear to a reader not familiar with the code
- the body of the commit contains a detailed description of the change
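A hypothetical commit following these conventions:
editoast, front: add error details to infra import failures

Importing an infrastructure could fail without an actionable message.
Surface the underlying parse error in the API response and display it
in the import modal.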
Counter-examples of commit titles
To be avoided entirely:
- component: update ./some/file.ext: specify the update itself rather than the file; the files are technical elements welcome in the body of the commit
- component: fix #42: specify the problem fixed in the title; links (to issues, etc.) are very welcome in the commit’s body
- wip: describe the work (and finish it)
Welcome to ease review, but do not merge:
- fixup! previous commit: an autosquash must be run before the merge
- Revert "previous commit of the same PR": both commits must be dropped before merging
The Developer Certificate of Origin (DCO)
All of OSRD’s projects use the DCO (Developer Certificate of Origin) to address legal matters. The DCO helps confirm that you have the rights to the code you contribute. For more on the history and purpose of the DCO, you can read The Developer Certificate of Origin by Roscoe A. Bartlett.
To comply with the DCO, all commits must include a Signed-off-by line.
How to sign a commit using git in a shell?
To sign off a commit, simply add the -s flag to your git commit command, like so:
git commit -s -m "Your commit message"
This also applies when using the git revert command.
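If some commits are missing the sign-off, git can add it after the fact:
git commit --amend --signoff     # sign off the last commit
git rebase --signoff origin/dev  # sign off every commit of the branch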
How to sign a commit using git in Visual Studio Code (VS Code)?
Go to File -> Preferences -> Settings, then search for and activate the Always Sign Off setting.
Finally, when you commit your changes via the VS Code interface, your commits will automatically be signed off.
1.3.6 - Share your changes
The author of a pull request (PR) is responsible for its “life cycle”. They are responsible for contacting the various parties involved, following the review, responding to comments, and correcting the code following review (you can also check the dedicated page about code review).
In the case of a large PR, don’t hesitate to ask several reviewers to organize themselves, or even to carry out the review together, reviewers and author.
Open a pull request
Once your changes are ready, you have to request integration with the dev branch.
If possible:
- Make PR of logical and atomic units too (avoid mixing refactoring, new features and bug fix at the same time).
- Add a description to PRs to explain what they do and why.
- Help the reviewer by following advice given in mtlynch article.
- Add area:<affected_area> tags to show which parts of the application have been impacted. This can be done through the web interface.
Take feedback into account
Once your PR is open, other contributors can review your changes:
- Any user can review your changes.
- Your code has to be approved by a contributor familiar with the code.
- All users are expected to take comments into account.
- Comments tend to be written in an open and direct manner. The intent is to efficiently collaborate towards a solution we all agree on.
- Once all discussions are resolved, a maintainer integrates the change.
For large PRs that are bound to evolve over time, keeping corrections made during review in separate commits helps reviewers. In the case of multiple reviews by the same person, this can save a full re-review (ask for help if necessary; a command sketch follows after this list):
- Add fixup, amend, squash or reword commits using the git commit documentation.
- Automatically merge corrections into the original commits of your PR and check the result, using git rebase -i --autosquash origin/dev (just before the merge and once the review process is complete).
- Push your changes with git push --force-with-lease, because you are not just pushing new commits, you are pushing changes to existing commits.
- If you believe somebody forgot to review / merge your change, please speak out, multiple times if needs be.
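For instance, the autosquash workflow above boils down to the following commands (the commit hash is illustrative):
git commit --fixup=a1b2c3d             # correction targeting commit a1b2c3d
git rebase -i --autosquash origin/dev  # fold fixups into their target commits
git push --force-with-lease            # update the PR branch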
Review cycle
A code review is an iterative process. For a smooth review, it is imperative to correctly configure your github notifications.
It is advisable to configure OSRD repositories as “Participating and @mentions”. This allows you to be notified of activities only on issues and PRs in which you participate.
Maintainers are automatically notified by the CODEOWNERS system. The author of a PR is responsible for advancing their PR through the review process and manually requesting maintainer feedback if necessary.
sequenceDiagram
  actor A as PR author
  actor R as Reviewer/Maintainer
  A->>R: Asks for a review, notifying some people
  R->>A: Answers yes or no
  loop Loop between author and reviewer
    R-->>A: Comments, asks for changes
    A-->>R: Answers to comments or requested changes
    A-->>R: Makes necessary changes in dedicated "fixups"
    R-->>A: Reviews, tests changes, and comments again
    R-->>A: Resolves requested changes/conversations if ok
  end
  A->>R: Rebase and apply fixups
  R->>A: Checks commits history
  R->>A: Approves or closes the PR
  Note left of R: & Merges if maintainer
1.3.7 - Tests
Back-end
- Integration tests are written with pytest in the /tests folder.
- Each route described in the openapi.yaml files must have an integration test.
- The test must check both the format and content of valid and invalid responses.
Front-end
The functional specification of the tests is written with the Product Owners; the developers then choose a technical implementation that precisely meets the needs expressed and fits the recommendations presented here.
We use Playwright to write end-to-end tests, and vitest to write unit tests.
The browsers tested are currently Firefox and Chromium.
Basic principles
- Tests must be short (1 min max) and go straight to the point.
- Arbitrary timeouts are outlawed; a test must systematically wait for a specific event. It is possible to use the polling (retrying an action, a click for example, after a certain time) proposed by Playwright’s API.
- All tests must be parallelizable.
- Tests must not target or wait for text elements from the translation; prefer the DOM tree structure or place specific ids.
- We’re not testing the data, but the application and its functionality. Data-specific tests should be developed in parallel.
Data
The data tested must be public data.
The data required for the tests (infrastructure and rolling stock) is provided in the application’s json files, injected at the start of each test and deleted at the end, regardless of the result or how the test is stopped, including with CTRL+C.
This is done by API calls in TypeScript before launching the actual test.
The data tested is the same, both locally and via continuous integration.
Atomicity of a test
Each test must be atomic: it is self-sufficient and cannot be divided.
A test will target a single feature or component, provided it is not too large. A test will not test an entire module or application; it will necessarily be a set of tests, in order to preserve test atomicity.
If a test needs elements to be created or added, these operations must be carried out by API calls in TypeScript upstream of the test, as is done for adding data. These elements must be deleted at the end of the test, regardless of the result or how it is stopped, including by CTRL+C.
This allows tests to be parallelized.
However, in certain cases where it is relevant, a test may contain several clearly explained and justified test subdivisions (several test() blocks in a single describe()).
Example of a test
The requirement: “We want to test the addition of a train to a timetable”.
- add the test infrastructure and rolling stock to the database by API calls.
- create project, study and scenario with choice of test infrastructure by API calls.
- start the test, clicking on “add one or more trains” until the presence of the trains in the timetable is verified
- the test passes, fails or is stopped, the project, study and scenario are deleted, along with the test rolling stock and infrastructure by API calls.
NB: the test will not test all the possibilities offered by the addition of trains; this should be a specific test which would test the response of the interface for all scenarios without adding trains.
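A sketch of how such a test could be structured, assuming hypothetical TypeScript helpers wrapping the API calls described above:
import { test, expect } from "@playwright/test";

// Hypothetical helpers performing the setup/teardown API calls.
import { createTestDataset, deleteTestDataset, type TestDataset } from "./utils/api-setup";

let dataset: TestDataset;

test.beforeAll(async () => {
  // Inject infrastructure, rolling stock, project, study and scenario.
  dataset = await createTestDataset();
});

test.afterAll(async () => {
  // Always clean up, whatever the outcome of the test.
  await deleteTestDataset(dataset);
});

test("add a train to the timetable", async ({ page }) => {
  await page.goto(`/scenarios/${dataset.scenarioId}`); // hypothetical route
  await page.locator("#add-train").click(); // hypothetical id
  await expect(page.locator("#timetable .train-row")).toHaveCount(1);
});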
1.4 - Review process
The reviewer/maintainer undertakes to carry out the review quickly. They are also responsible for closing requested changes, checking the commit history, and quickly merging the pull request if allowed.
Here are a few tips and recommendations that we think make for a humane, relevant and rewarding code review for all contributors:
- How to Make Your Code Reviewer Fall in Love with You, by Michael Lynch.
- How to Do Code Reviews Like a Human, by Michael Lynch.
Review cycle
A code review is an iterative process. For a smooth review, it is imperative to correctly configure your github notifications.
It is advisable to configure OSRD repositories as “Participating and @mentions”. This allows you to be notified of activities only on issues and PRs in which you participate.
Maintainers are automatically notified by the CODEOWNERS system. The author of a PR is responsible for advancing their PR through the review process and manually requesting maintainer feedback if necessary.
sequenceDiagram
  actor A as PR author
  actor R as Reviewer/Maintainer
  A->>R: Asks for a review, notifying some people
  R->>A: Answers yes or no
  loop Loop between author and reviewer
    R-->>A: Comments, asks for changes
    A-->>R: Answers to comments or requested changes
    A-->>R: Makes necessary changes in dedicated "fixups"
    R-->>A: Reviews, tests changes, and comments again
    R-->>A: Resolves requested changes/conversations if ok
  end
  A->>R: Rebase and apply fixups
  R->>A: Checks commits history
  R->>A: Approves or closes the PR
  Note left of R: & Merges if maintainer
The code review pyramid
Script for testing a PR
When reviewing a PR, it is useful to test the changes by starting an instance of the OSRD app based on the PR branch.
A script is available to spin up a separate and dedicated app instance using the PR number. The script uses the Docker images already built by the CI and launches the app, running in isolation. This allows you to run both instances simultaneously without conflicts (ideal for comparing changes, for example).
Additionally, you can specify a database backup, which the script will load directly into the app.
The app will be launched on port 4001. You can access it at http://localhost:4001/
Available Commands:
- ./scripts/pr-tests-compose.sh 8914 up: downloads the CI-generated images for PR #8914 and launches the application.
- ./scripts/pr-tests-compose.sh 8914 up-and-load-backup ./path_to_backup: downloads the images for PR #8914, restores data from the provided backup, and starts the application.
- ./scripts/pr-tests-compose.sh down: shuts down the test application instance for PR #8914.
- ./scripts/pr-tests-compose.sh down-and-clean: shuts down the test instance and cleans all the instance’s docker volumes (PG data, Valkey cache, RabbitMQ) to prevent any side effects.
Accessing Services:
Apart from the frontend server, all services are available on localhost, with a minor port adjustment (to avoid conflicts with the dev environment). For a list of common ports, have a look at the dedicated docker-compose file.
1.5 - Report issues
Please report anything you deem significant!
Our bug tracking platform is github, so you have to register to report bugs.
Follow this link and pick whatever template fits the best.
Bugs
- Bugs must have a correct description, and the bug’s issue template must be filled carefully.
- Bugs must be tagged with (for team members):
  - kind:bug
  - one or several area:<affected_area> if possible; if the affected area is not known, leave it blank, it will be added later by another team member.
  - one severity:<bug_severity> if possible; if the severity is not known, leave it blank, it will be added later by another team member.
    - severity:minor: the user can still use the feature.
    - severity:major: the user sometimes can’t use the feature.
    - severity:critical: the user can’t use the feature.
- OSRD team members can change issues’ tags (severity, area, kind, …). You may leave a comment to explain changes.
- If you are working on a bug or plan to work on a bug, assign yourself to the bug.
- PRs solving bugs should add a regression test to ensure that the bug will not come back in the future.
1.6 - Install docker
Regardless of your operating system, docker requires linux to operate. When used on a different operating system, docker relies on virtual machines to build and run images.
There are two main types of docker installations:
- docker engine is the usual docker command line application
- docker desktop is a GUI app that also manages virtualization
Here’s what we suggest:
- If you’re on linux, install docker engine using your package manager
- If you’re on MacOS / Windows, install docker desktop if you are allowed to
- If you’re on windows and want to get docker running within WSL, or can’t use docker desktop, follow the docker on WSL tutorial
- If you’re on MacOS and can’t use docker desktop, follow the MacOS colima tutorial
Docker on WSL
This install option is very useful, as it allows having a perfectly normal linux install of docker engine inside WSL, which can still be reached from windows.
- Install WSL (if you had an old version of WSL, run wsl --update)
- Get an operating system image from the Microsoft store (for example, debian or ubuntu)
- Enable systemd support within the WSL VM (see the sketch after this list)
- Follow the regular linux install tutorial for docker
- If you have docker desktop installed, you can configure it to use WSL
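Enabling systemd (third step) is done in the WSL VM’s /etc/wsl.conf; restart the VM afterwards by running wsl --shutdown from windows:
# /etc/wsl.conf (inside the WSL VM)
[boot]
systemd=true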
MacOS colima
This procedure allows installing docker without relying on docker desktop. It uses colima for virtualizing linux.
- Install homebrew
- Install the packages:
brew install docker docker-compose colima
- Install the compose plugin:
mkdir -p ~/.docker/cli-plugins && ln -sfn $(brew --prefix)/opt/docker-compose/bin/docker-compose ~/.docker/cli-plugins/docker-compose
- Configure colima:
  - for apple silicon (M1/M2) macbooks:
colima start --cpu 2 --memory 6 --arch aarch64 --vm-type=vz --vz-rosetta --mount-type=virtiofs
  - for small infrastructures:
colima start --cpu 2 --memory 4
  - for big infrastructures:
colima start --cpu 2 --memory 6
- Run brew services start colima to automatically start colima on startup
- Exit your terminal, open a new one
- You can now use the docker CLI
If you’re using rancher desktop, please either:
- uninstall the application
- select
Manual
inPreferences
>Application
>Environement
colima delete
and try again (the disk format is not compatible). Settings will be lost.If you get this error: error getting credentials - err: exec: "docker-credential-osxkeychain": executable file not found in $PATH
Open ~/.docker/config.json
, and remove "credsStore": "osxkeychain"
2 - Deploy OSRD
First of all, we recommend learning about the containers architecture of OSRD.
We will cover how to deploy OSRD within the following setups:
- Using docker compose on a single node.
- Using helm on a kubernetes cluster.
It is also possible to deploy each service of OSRD manually on a system, but we will not cover this topic within this guide.
NB
In order for the STDCM tool to function, you’ll need to set up the STDCM Search Environment, a configuration stored in the database. See the dedicated page for more information.
2.1 - Docker Compose
The OSRD project includes a docker-compose.yml file designed to facilitate the deployment of a fully functional OSRD environment.
Only intended for development purposes, this Docker Compose configuration could be adapted for quick, single-node deployments.
Disclaimer
This setup is designed for development only. For example no authentication is supported and the front-end is served in development mode (rebuilt on the fly). If you mean to deploy a production ready version of OSRD, please follow the Kubernetes-based deploymentPrerequisites
Before proceeding with the deployment, ensure that you have the following installed:
- Docker
- Docker Compose
Configuration Overview
The docker-compose.yml file defines the following services:
- PostgreSQL: A PostgreSQL database with PostGIS extension.
- Valkey: A Valkey server for caching.
- Core: The core OSRD service.
- Front: The front-end service for OSRD.
- Editoast: An OSRD service responsible for various editorial functions.
- Gateway: Serves as the gateway for the OSRD services.
- Wait-Healthy: A utility service to ensure all services are healthy before proceeding.
Each service is configured with health checks, volume mounts and necessary environment variables.
Deployment Steps
- Clone the Repository: First, clone the OSRD repository to your local machine.
- Configuration: The default configuration requires setting an environment variable for the Editoast service: ROOT_URL. It should be set to the URL pointing to the Editoast service through the gateway, for example "http://your-domain.com/api". You can also adjust other environment variables if needed.
- Build and Run: Navigate to the directory containing docker-compose.yml and run:
docker-compose up --build
This command builds the images and starts the services defined in the Docker Compose file.
Accessing Services
While all HTTP services are used through the gateway (http://localhost:4000), you can access each service directly using its exposed port:
- PostgreSQL: accessible on localhost:5432
- Valkey: accessible on localhost:6379
- Core Service: accessible on localhost:8080
- Front-End: accessible on localhost:3000
- Editoast: accessible on localhost:8090
Notes and Considerations
- This setup is designed for development and quick deployments. For production environments, additional considerations for security, scalability and reliability should be addressed.
- Ensure that the POSTGRES_PASSWORD and other sensitive credentials are securely managed, especially in production deployments.
2.2 - Kubernetes with Helm
The OSRD project’s Helm Chart provides a flexible and efficient way to deploy OSRD services in a Kubernetes environment. This document outlines the configuration options available in the Helm Chart, focusing on each service component.
Prerequisites
Before proceeding with the deployment, ensure that you have the following installed:
- A Kubernetes cluster up and running
- A PostgreSQL database with PostGIS
- A Valkey server (used for caching)
The tileserver
Tileserver is the component responsible for generating vector map tiles. It is recommended to separate it from the standard Editoast while running a production setup, since Editoast cannot be scaled horizontally (it is stateful).
You can visualize the recommended deployment here:
flowchart TD gw["gateway"] front["front-end static files"] gw -- local file --> front browser --> gw gw -- HTTP --> editoast gw -- HTTP --> tileserver-1 gw -- HTTP --> tileserver-2 gw -- HTTP --> tileserver-n... editoast -- HTTP --> core
The Helm chart leverages Kubernetes’ HorizontalPodAutoscaler in order to spawn as many tileserver replicas as required for the current workload.
Chart Values Overview
The Helm Chart is configurable through the following values:
Core Service
- core: Configuration for the core OSRD service.
  - internalUrl: Internal URL for service communication.
  - image: Docker image to use.
  - pullPolicy: Image pull policy.
  - replicaCount: Number of replicas.
  - service: Service type and port configuration.
  - resources, env, annotations, labels, nodeSelector, tolerations, affinity: Various Kubernetes deployment options.
Editoast Service
- editoast: Configuration for the Editoast service.
  - Includes similar options as core for Kubernetes deployment.
  - init: Initialization configuration.
Tile Server
- tileServer: Specialized Editoast service that serves only vector map tiles.
  - enabled: Set to true to enable tile server functionality.
  - image: Docker image to use (typically the same as Editoast).
  - replicaCount: Number of replicas, allowing for horizontal scaling.
  - hpa: Horizontal Pod Autoscaler configuration.
  - Other standard Kubernetes deployment options.
Gateway
- gateway: Configuration for the OSRD gateway.
  - Includes service, ingress, and other Kubernetes deployment options.
  - config: Specific configurations for authentication and trusted proxies.
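As a sketch, a values.yml overriding a few of the fields above might look like this (the values are illustrative, not recommendations):
core:
  replicaCount: 1
editoast:
  replicaCount: 1
tileServer:
  enabled: true
  replicaCount: 2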
Deployment
The chart is available in the ghcr OCI repository. You can find two Helm charts:
- Stable charts: oci://ghcr.io/OpenRailAssociation/charts/osrd
- Dev charts: oci://ghcr.io/OpenRailAssociation/charts/osrd-dev
To deploy the OSRD services using this Helm Chart:
Configure Values: Adjust the values in the Helm Chart to suit your deployment needs.
Install Chart: Use Helm to install the chart into your Kubernetes cluster.
helm install osrd oci://ghcr.io/OpenRailAssociation/charts/osrd -f values.yml
2.3 - STDCM search environment configuration
In order for the STDCM tool to function, you’ll need to set up the STDCM Search Environment, a configuration stored in the database.
The configurable fields are as such:
pub struct StdcmSearchEnvironment {
pub infra_id: i64,
pub electrical_profile_set_id: Option<i64>,
pub work_schedule_group_id: Option<i64>,
pub timetable_id: i64,
pub search_window_begin: NaiveDateTime,
pub search_window_end: NaiveDateTime,
}
This configuration is queried by the frontend, so that the right objects and time bounds are used transparently for the user.
In order to set up this config, you can either:
- Use the provided REST API (see the editoast openAPI in the stdcm_search_environment section)
- Use the provided editoast CLI (run editoast stdcm-search-env help for more information)
3 - Logo
You can download each logo independently by clicking directly on it, or download all the logos compressed into a zip file.
It is advisable to carefully choose the logo you want to use, depending on the background on which you want to display it.
Modification, addition or deletion of the shading other than as presented in the logos is not authorised. This applies more generally throughout the design: the choice to use drop shadows is part of the design considerations, not a variable element.
Official
Official for dark backgrounds
White
Black
Favicons, logo without text
🚫 What you can’t do
Too small (< 16px height)
Disproportion
Change the text colour or drop shadow
Changing direction
Deformation
✅ What you can do
Changing the internal colour for a specific event
Use of logo only (without text)
Colors
These colours are those of the logo and should not be confused with those of the overall design of the OSRD interface.
#786ABF #C7B2DE
4 - OSRD's design
Everything is presented on a dedicated website https://design.osrd.fr
A “design system” is being developed.
5 - Release
All OSRD releases are accessible here
The process for creating a new release is as follows:
- We always release on a tested version of the application (staging branch):
git switch staging && git pull
- Create an annotated git tag (we are using semantic versioning):
  - git tag -a vx.y.z with the message Release x.y.z (most of the time, use the latest version and increment the patch version)
  - git push --tags
- Create a github release:
  - Draft a new github release here
  - Select the created tag
  - Generate the release notes
  - Rename the release like so: “Version x.y.z”
  - Check the “Set as a pre-release” box
  - Apply the changelog format
  - Then you can publish the release, or save the draft if you want to come back later
- A github action should be triggered automatically.
- Post the link of the created release on matrix. Suggest that the developers review the release.
Changelog format
- Use the following structure:
## What's Changed
### Features :tada:
### Code refactoring :recycle:
### Bug fixes :bug:
## New Contributors
<!-- Copy from the generated release notes -->
...
<!-- Copy from the generated release notes -->
**Full Changelog**: ...
- Partition the different pull requests
  - Merge or group PRs when it makes sense. Examples:
    - Dependency bump PRs (merge)
    - Multi-part PRs (merge)
    - One big feature implemented by multiple PRs (group)
  - Reword PR titles. They should be comprehensible to an external collaborator.