3 - Front-end conventions
Coding style guide and best practices for front-end
We use React, and all files must be written in TypeScript.
The code is linted with ESLint and formatted with Prettier.
Nomenclature
The applications (osrd eex, osrd stdcm, infra editor, rolling-stock editor) offer views (project management, study management, etc.) linked to modules (project, study, etc.) which contain the components.
These views are made up of components and sub-components, all derived from the modules.
In addition to containing the view files for the applications, they may also contain a scripts directory which offers scripts related to these views. The views determine the logic and the access to the store.
Modules are collections of components attached to an object (a scenario, a rolling stock, a TrainSchedule). They contain:
- a components directory hosting all components
- an optional styles directory per module for styling components in scss
- an optional assets directory per module (which contains assets, e.g. default datasets, specific to the module)
- an optional reducers file per module
- an optional types file per module
- an optional consts file per module
There is also an assets directory (containing images and other files).
Last but not least, a common directory offering:
- a utils directory for utility functions common to the entire project
- a types file for types common to the entire project
- a consts file for constants common to the entire project
Implementation principles
Routing & SLUG
In progress
projects/{project name}/studies/{study name}/scenarios/{scenario name}
Styles & SCSS
WARNING: in CSS/React, the scope of a class does not depend on where the file is imported; it is valid for the entire application. If you import an SCSS file in the depths of a component (which we strongly advise against), its classes will be available to the whole application and may therefore cause side effects.
It is therefore highly recommended to make the tree structure of applications, views, modules and components easy to follow within the SCSS code as well, and in particular to nest class names to avoid side effects, as the compiler will take care of building the necessary hierarchy.
If, for example, we have a rollingStockSelector component which proposes a list of rolling stock rollingStockList, represented by rollingStockCard containing an image representing the rolling stock rollingStockImg, we should have the following SCSS structure:
.rollingStockSelector {
  .rollingStockList {
    .rollingStockCard {
      .rollingStockImg {
        width: 5rem;
        height: auto;
      }
    }
  }
}
This ensures that the image contained in the rolling stock card inherits the correct CSS properties: .rollingStockSelector .rollingStockList .rollingStockCard .rollingStockImg.
CSS Modules
CSS modules allow scoping CSS styles to a specific component, thereby avoiding conflicts with global class names.
Vite natively supports CSS modules. Ensure that your CSS file has the .module.css extension, for example styles.module.css.
Using CSS Modules in Components
- Create an SCSS file with the .module.scss extension:
/* MyComponent.module.scss */
.container {
  background-color: white;
}

.title {
  font-size: 24px;
  color: #333;
}
- Use the classes in your React component:
Vite transforms the classes into objects containing hashed class names (e.g., _container_h3d8bg) and uses them during bundle generation, making the classes unique.
import React from "react";

import styles from "./MyComponent.module.scss";

export function MyComponent() {
  return (
    <div className={styles.container}>
      <h1 className={styles["title"]}>My Title</h1>
    </div>
  );
}
For more information, you can refer to the Vite.js documentation.
Class names, using cx()
Classes are normally added one after the other, in the className=""
property.
However, when necessary - class usage tests, concatenation, etc. - we use the classnames library, which recommends the following usage:
<div className="rollingStockSelector">
  <div className="rollingStockList">
    <div className="rollingStockCard w-100 my-2">
      <img
        className={cx("rollingStockImg", "m-2", "p-1", "bg-white", {
          valid: isValid(),
          selected: rollingStockID === selectedRollingStockID,
        })}
      />
    </div>
  </div>
</div>
Each class goes in its own string, and Boolean or other operations are performed in an object whose keys are returned (or not) as the class names to be used in the CSS.
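As a rough sketch of this behaviour (an illustration, not the real classnames implementation), cx() can be thought of as:

```typescript
// Minimal sketch of what classnames' cx() computes: string arguments are
// kept as-is, and object keys are kept only when their value is truthy.
type CxArg = string | Record<string, boolean>;

const cx = (...args: CxArg[]): string =>
  args
    .flatMap((arg) =>
      typeof arg === "string"
        ? [arg]
        : Object.entries(arg)
            .filter(([, enabled]) => enabled)
            .map(([name]) => name)
    )
    .join(" ");

console.log(cx("rollingStockImg", "m-2", { valid: true, selected: false }));
// → "rollingStockImg m-2 valid"
```

The real library also accepts arrays and falsy arguments; this sketch only covers the usage shown above.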
Store/Redux
Everything that is a selector is managed by the view and passed as props to components and sub-components.
Consequently, read and write calls to the store must be made at view level, feeding the components proposed by the view with props and states.
RTK
Use the endpoints generated from the openapi.yaml files to consume the backend.
Operation of RTK Query cache
When data is retrieved from the backend, RTK caches it in the store. If the same endpoint is called again with the same parameters, RTK will use the cached data instead of making a new call to the backend.
In the store, you will see the editoastApi
key containing the cached data of all editoast endpoints:
Here, for example, the getProjects endpoint is called. RTK stores the endpoint’s name, as well as the call’s parameters, to form a unique key endpointName({ parameters }) (here getProjects({"ordering":"LastModifiedDesc","pageSize":1000})).
{
  'getProjectsByProjectIdStudiesAndStudyId({"projectId":13,"studyId":16})': {
    status: "fulfilled",
    …
  },
  'getProjectsByProjectIdStudiesAndStudyId({"projectId":13,"studyId":14})': {
    …
  }
}
In this second example, the same endpoint has been called with the same projectId parameter, but a different studyId parameter.
Serialization of keys in the cache
The strings used as keys in the cache are essentially the parameter object passed through the JSON.stringify
function, which converts a JS object into a string (thus serialized).
By default, serialization preserves the insertion order of object keys rather than normalizing it. For example, JSON.stringify will not produce the same string for these two objects: { a: 1, b: 2 } and { b: 2, a: 1 }.
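This can be verified directly:

```typescript
// Plain JSON.stringify keeps insertion order, so the two strings differ:
const first = JSON.stringify({ a: 1, b: 2 });
const second = JSON.stringify({ b: 2, a: 1 });
console.log(first); // → '{"a":1,"b":2}'
console.log(second); // → '{"b":2,"a":1}'
```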
RTK will optimize caching by ensuring that the result of a call with {"projectId":13,"studyId":16}
or {"studyId":16, "projectId":13}
is stored under the same key in the cache.
To see the detailed operation, here is the code for this serialization function:
RTK Serialization Function
const defaultSerializeQueryArgs: SerializeQueryArgs<any> = ({
  endpointName,
  queryArgs,
}) => {
  let serialized = ''
  const cached = cache?.get(queryArgs)
  if (typeof cached === 'string') {
    serialized = cached
  } else {
    // Sort the object keys before stringifying, to prevent useQuery({ a: 1, b: 2 })
    // having a different cache key than useQuery({ b: 2, a: 1 })
    const stringified = JSON.stringify(queryArgs, (key, value) =>
      isPlainObject(value)
        ? Object.keys(value)
            .sort() // keys are reordered here
            .reduce<any>((acc, key) => {
              acc[key] = (value as any)[key]
              return acc
            }, {})
        : value
    )
    if (isPlainObject(queryArgs)) {
      cache?.set(queryArgs, stringified)
    }
    serialized = stringified
  }
  return `${endpointName}(${serialized})`
}
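The key-sorting idea can be reduced to a stripped-down, self-contained sketch (an illustration of the same technique, not RTK’s exact code):

```typescript
// Key-order-insensitive serialization: plain objects have their keys sorted
// via a JSON.stringify replacer, so both parameter orders yield the same key.
const stableStringify = (args: Record<string, unknown>): string =>
  JSON.stringify(args, (_key, value) =>
    value !== null && typeof value === "object" && !Array.isArray(value)
      ? Object.keys(value as Record<string, unknown>)
          .sort() // reorder keys so both spellings serialize identically
          .reduce<Record<string, unknown>>((acc, k) => {
            acc[k] = (value as Record<string, unknown>)[k];
            return acc;
          }, {})
      : value
  );

console.log(stableStringify({ studyId: 16, projectId: 13 }));
// → '{"projectId":13,"studyId":16}'
console.log(stableStringify({ projectId: 13, studyId: 16 }));
// → '{"projectId":13,"studyId":16}'
```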
Data subscription
In RTK Query terminology, when a React component calls an endpoint defined in RTK Query, it subscribes to the data.
RTK counts the number of references to the same pair (endpoint, {parameters}). When two components subscribe to the same data, they share the same key in the cache.
import { osrdEditoastApi } from "./api.ts";

function Component1() {
  // component subscribes to the data
  const { data } = osrdEditoastApi.useGetXQuery(1);
  return <div>...</div>;
}

function Component2() {
  // component subscribes to the data
  const { data } = osrdEditoastApi.useGetXQuery(2);
  return <div>...</div>;
}

function Component3() {
  // component subscribes to the data
  const { data } = osrdEditoastApi.useGetXQuery(3);
  return <div>...</div>;
}

function Component4() {
  // component subscribes to the *same* data as Component3,
  // as it has the same query parameters
  const { data } = osrdEditoastApi.useGetXQuery(3);
  return <div>...</div>;
}
Here, Component3 and Component4 will generate only one call to the backend. They subscribe to the same data (same endpoint and same parameter 3), so they share the same key in the cache.
In total, there will be three calls to the backend here, with parameters 1, 2, and 3.
As long as at least one mounted React component calls the osrdEditoastApi.endpoints.getProjectsByProjectId.useQuery
hook, for example, the data will be retained in the cache.
Once the last component is unmounted, the data is removed from the cache after 60 seconds (default value).
Translation
Application translation is performed on Transifex. The default language is French.
If you add a new translation key, it can be added directly to the code, in all available languages. Please note that if you need to correct a translation, we recommend using Transifex, to avoid any conflict.
Rules and important elements
No component should be responsible for updating the data it uses
Only views contain the store selectors, which are then given as props to the components of the module linked to the view.
SCSS is not scoped
A .scss file buried in the tree structure doesn’t guarantee that the classes it contains can only be accessed there, even when it is imported from React (which is formally forbidden, by the way: you must use SCSS imports): all declared classes are accessible everywhere.
Prefer a judicious choice of root class name for a given module, and use the tree structure available in the SCSS file.
Imports must follow a specific order
ESLint is set up to automatically sort imports into four groups, each of them sorted in alphabetical order:
- React
- External libraries
- Internal absolute path files
- Internal relative path files
Each of these groups will be separated by an empty line.
ESLint will trigger a warning if you don’t follow these guidelines.
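For illustration, a file following this order might look like the fragment below (the lodash import and the relative ./utils module are hypothetical examples, not actual project files):

```typescript
import React from "react";

import { sortBy } from "lodash";

import { osrdEditoastApi } from "common/api/osrdEditoastApi";

import { formatTrainName } from "./utils";
```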
Import links must be absolute
You must use the full path for all your imports.
Import links can be relative only if the file to be imported is in the same directory.
TypeScript
import & export
ESLint and TypeScript are set up to enforce typed imports for exported types.
This setup allows:
- Automatically typing the import when using a type in a file with autocompletion.
- Getting errors from both tools asking you to use a type import if you didn’t.
When an import
or export
contains only types, indicate it with the type
keyword.
export type { Direction, DirectionalTrackRange as TrackRange };
import type { typedEntries, ValueOf } from "utils/types";
When an import contains not only types, it is structured as below, in alphabetical order.
import {
osrdEditoastApi,
type ScenarioCreateForm,
} from "common/api/osrdEditoastApi";
This allows to:
- Improve the performance and analysis process of the compiler and the linter.
- Make these declarations more readable; we can clearly see what we are importing.
- Avoid dependency cycles: the error disappears with the type keyword
- Make final bundle lighter (all types disappear at compilation)
5 - Commit conventions
A few tips and rules about commit messages
Commit style
The overall format for git commits is as follows:
component1, component2: imperative description of the change
Detailed or technical description of the change and what motivates it,
if it is not entirely obvious from the title.
- the commit message, just like the code, must be in English (only ASCII characters for the title)
- there can be multiple components, separated by : in case of hierarchical relationships, or by , otherwise
- components are lower-case, using -, _ or . if necessary
- the imperative description of the change begins with a lower-case verb
- the title must not contain any link (# is forbidden)
Ideally:
- the title should be self-explanatory: no need to read anything else to understand it
- the commit title is all lower-case
- the title is clear to a reader not familiar with the code
- the body of the commit contains a detailed description of the change
An automated check is performed to enforce this formatting as much as possible.
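Putting these rules together, a hypothetical commit (components, title and body invented for illustration) might look like:

```
editoast, infra: simplify track section loading

Loading was done in two passes for historical reasons;
a single query is enough and makes the code easier to follow.
```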
Counter-examples of commit titles
To be avoided entirely:
- component: update ./some/file.ext: specify the update itself rather than the file; the files are technical elements welcome in the body of the commit
- component: fix #42: specify the problem fixed in the title; links (to issues, etc.) are very welcome in the commit’s body
- wip: describe the work (and finish it)

Welcome to ease review, but do not merge:
- fixup! previous commit: an autosquash must be run before the merge
- Revert "previous commit of the same PR": both commits must be dropped before merging
The Developer Certificate of Origin (DCO)
All of OSRD’s projects use the DCO (Developer Certificate of Origin) to address
legal matters. The DCO helps confirm that you have the rights to the code you
contribute. For more on the history and purpose of the DCO, you can read The
Developer Certificate of Origin
by Roscoe A. Bartlett.
To comply with the DCO, all commits must include a Signed-off-by line.
How to sign a commit using git in a shell?
To sign off a commit, simply add the -s flag to your git commit command, like so:
git commit -s -m "Your commit message"
This also applies when using the git revert
command.
How to sign a commit using git in Visual Studio Code (VS Code)?
Go to Files -> Preferences -> Settings, search for the Always Sign Off setting and activate it.
Finally, when you commit your changes via the VS Code interface, your commits will automatically be signed off.
Continue towards sharing your changes ‣
6 - Share your changes
How to submit your code modifications for review?
The author of a pull request (PR) is responsible for its “life cycle”: contacting the various parties involved, following the review, responding to comments and correcting the code following review (you can also check the dedicated page about code review).
In the case of a large PR, don’t hesitate to ask several reviewers to organize themselves, or even to carry out the review together, reviewers and author.
Open a pull request
Once your changes are ready, you have to request integration with the dev
branch.
If possible:
- Make PRs of logical and atomic units (avoid mixing refactoring, new features and bug fixes at the same time).
- Add a description to PRs to explain what they do and why.
- Help the reviewer by following advice given in mtlynch article.
- Add area:<affected_area> tags to show which parts of the application have been impacted. This can be done through the web interface.
Take feedback into account
Once your PR is open, other contributors can review your changes:
- Any user can review your changes.
- Your code has to be approved by a contributor familiar with the code.
- All users are expected to take comments into account.
- Comments tend to be written in an open and direct manner.
The intent is to efficiently collaborate towards a solution we all agree on.
- Once all discussions are resolved, a maintainer integrates the change.
For large PRs that are bound to evolve over time, keeping corrections during review in separate
commits helps reviewers. In the case of multiple reviews by the same person, this can save full
re-review (ask for help if necessary):
- If you believe somebody forgot to review / merge your change, please speak out, multiple times if needs be.
Review cycle
A code review is an iterative process.
For a smooth review, it is imperative to correctly configure your GitHub notifications.
It is advisable to configure OSRD repositories as “Participating and @mentions”. This allows you to be notified of activity only on issues and PRs in which you participate.
Maintainers are automatically notified by the CODEOWNERS
system. The author of a PR is responsible for advancing their PR through the review process and manually requesting maintainer feedback if necessary.
sequenceDiagram
actor A as PR author
actor R as Reviewer/Maintainer
A->>R: Asks for a review, notifying some people
R->>A: Answers yes or no
loop Loop between author and reviewer
R-->>A: Comments, asks for changes
A-->>R: Answers to comments or requested changes
A-->>R: Makes necessary changes in dedicated "fixups"
R-->>A: Reviews, tests changes, and comments again
R-->>A: Resolves requested changes/conversations if ok
end
A->>R: Rebase and apply fixups
R->>A: Checks commits history
R->>A: Approves or closes the PR
Note left of R: & Merges if maintainer
Finally continue towards tests ‣
7 - Tests
Recommendations for testing purposes
Back-end
- Integration tests are written with pytest in the /tests folder.
- Each route described in the openapi.yaml files must have an integration test.
- The test must check both the format and content of valid and invalid responses.
Front-end
The functional content of the tests is specified with the Product Owners, and the developers choose a technical implementation that precisely meets the needs expressed and fits the recommendations presented here.
We use Playwright to write end-to-end tests, and vitest to write unit tests.
The browsers tested are currently Firefox and Chromium.
Basic principles
- Tests must be short (1 min max) and go straight to the point.
- Arbitrary timeouts are outlawed; a test must systematically wait for a specific event. It is possible to use the polling proposed in Playwright’s API (retrying an action, a click for example, after a certain time).
- All tests must be parallelizable.
- Tests must not point to or wait for text elements from the translation; prefer the DOM tree structure or place specific ids.
- We’re not testing the data, but the application and its functionality. Data-specific tests should be developed in parallel.
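As an illustration of event-based waiting, a generic polling helper could look like the sketch below (a hypothetical utility, not Playwright’s API; Playwright provides its own retrying mechanisms):

```typescript
// Hypothetical polling helper: retries an action until a condition holds,
// instead of relying on an arbitrary fixed timeout.
async function pollUntil<T>(
  action: () => Promise<T> | T,
  isDone: (result: T) => boolean,
  { intervalMs = 100, maxAttempts = 50 } = {}
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    const result = await action();
    if (isDone(result)) return result;
    // wait before retrying
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("pollUntil: condition never met");
}
```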
Data
The data tested must be public data.
The data required for the tests (infrastructure and rolling stock) is provided in the application’s json files, injected at the start of each test and deleted at the end, regardless of the test’s result or how it is stopped, including with CTRL+C.
This is done by API calls in TypeScript before launching the actual test.
The data tested is the same, both locally and via continuous integration.
Atomicity of a test
Each test must be atomic: it is self-sufficient and cannot be divided.
A test will target a single feature or component, provided it is not too large. A test will not test an entire module or application; it will necessarily be a set of tests, in order to preserve test atomicity.
If a test needs elements to be created or added, these operations must be carried out by API calls in TypeScript upstream of the test, as is done for adding data. These elements must be deleted at the end of the test, regardless of the result or how it is stopped, including by CTRL+C.
This allows tests to be parallelized.
However, in certain cases where it is relevant, a test may contain several clearly explained and justified test subdivisions (several test() in a single describe()).
Example of a test
The requirement: “We want to test the addition of a train to a timetable”.
- add the test infrastructure and rolling stock to the database by API calls
- create the project, study and scenario, selecting the test infrastructure, by API calls
- run the test, clicking on “add one or more trains” until the presence of the trains in the timetable is verified
- whether the test passes, fails or is stopped, delete the project, study and scenario, along with the test rolling stock and infrastructure, by API calls
NB: the test will not test all the possibilities offered by the addition of trains; this should be a specific test which would test the response of the interface for all scenarios without adding trains.
Continue towards write code ‣