☑️ FAIR test

fair-test is a library to build and deploy FAIR metrics tests APIs supporting the specifications used by the FAIRMetrics working group.

It aims to enable Python developers to easily write and deploy FAIR metrics test functions that can be queried by various FAIR evaluation services, such as FAIR enough and the FAIRsharing FAIR Evaluator.

Feel free to create an issue, or send a pull request if you are facing issues or would like to see a feature implemented.

ℹ️ How it works

The user defines and registers custom FAIR metrics tests in separate files in a specific folder (the metrics folder by default), and starts the API.
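For example, a minimal project using the default metrics folder and the files described in the sections below could be laid out like this:

.
├── main.py            # declares the FairTestAPI
├── metrics/
│   └── a1_my_test.py   # one FAIR metric test per file
├── .env                # contact details and host URL
└── requirements.txt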

Built with FastAPI, pydantic and RDFLib. Tested for Python 3.7, 3.8 and 3.9.

📥 Install the package

Install the package from PyPI:

pip install fair-test

🏗️ Build a FAIR metrics test API

Check out the example folder for a complete working app example to get started, including a Docker deployment.

If you want to start from a project with everything ready to deploy in production, we recommend forking the fair-enough-metrics repository.

📝 Define the API

Create a main.py file to declare the API. You can provide a different folder than metrics here; the folder path is relative to where you start the API (the root of the repository):

from fair_test import FairTestAPI

app = FairTestAPI(
    title='FAIR Metrics tests API',
    description="""FAIR Metrics tests API""",
    license_info={
        "name": "MIT license",
        "url": "https://opensource.org/licenses/MIT",
    },
)

Create a .env file to provide information used by the API, such as contact details and the host URL (note that you don't need to change it for localhost in development), e.g.:

CONTACT_NAME="Vincent Emonet"
CONTACT_EMAIL="[email protected]"
ORG_NAME="Institute of Data Science at Maastricht University"

🧪 Define a FAIR metrics test

Create an a1_my_test.py file in the metrics folder with your test:

from fair_test import FairTest

class MetricTest(FairTest):
    metric_path = 'a1-check-something'
    applies_to_principle = 'A1'
    title = 'Check something'
    description = """Test something"""
    author = 'https://orcid.org/0000-0000-0000-0000'
    metric_version = '0.1.0'

    def evaluate(self):
        self.info(f'Checking something for {self.subject}')
        g = self.getRDF(self.subject, use_harvester=False)
        if len(g) > 0:
            self.success(f'{len(g)} triples found, test successful')
        else:
            self.failure('No triples found, test failed')
        return self.response()

A few common operations are available on the self object, such as logging or retrieving RDF metadata from a URL.
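As a variation, here is a minimal sketch of a second test that only inspects the subject URL itself, using the same attributes and helpers shown above (the metric_path, principle and filename are made up for the example):

from fair_test import FairTest

class MetricTest(FairTest):
    metric_path = 'f1-check-https'
    applies_to_principle = 'F1'
    title = 'Check the subject URL uses HTTPS'
    description = """Check that the subject URL of the evaluated resource uses the HTTPS protocol"""
    author = 'https://orcid.org/0000-0000-0000-0000'
    metric_version = '0.1.0'

    def evaluate(self):
        # Log what is being checked (shows up in the test output)
        self.info(f'Checking the protocol used by {self.subject}')
        if self.subject.startswith('https://'):
            self.success('The subject URL uses HTTPS')
        else:
            self.failure('The subject URL does not use HTTPS')
        return self.response()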

🚀 Deploy the API

You can then run the metrics tests API on http://localhost:8000 with uvicorn, e.g. with the code provided in the example folder:

cd example
pip install -r requirements.txt
uvicorn main:app --reload

Check out example/README.md for more details, such as deploying it with Docker.
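Once the API is running you can query a test over HTTP, following the FAIRMetrics convention of POSTing the subject URL to evaluate. As a sketch, assuming the test defined above is exposed under its metric_path and using a placeholder subject URL (the exact route prefix may differ in your deployment):

curl -X POST http://localhost:8000/tests/a1-check-something \
  -H "Content-Type: application/json" \
  -d '{"subject": "https://example.org/dataset"}'

Since the API is built with FastAPI, the interactive OpenAPI documentation should also be available at http://localhost:8000/docs when running with the default settings.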

🧑‍💻 Development

🛠️ Install for development

Clone the repository and install the dependencies locally for development:

git clone https://github.com/MaastrichtU-IDS/fair-test
cd fair-test
pip install -e .

If you face dependency conflicts, you can use a virtual environment to isolate the installation:

# Create the virtual environment folder in your workspace
python3 -m venv .venv
# Activate it using a script in the created folder
source .venv/bin/activate

✔️ Run the tests

Install pytest for testing:

pip install pytest

Run the tests locally (from the root folder) and display prints:

pytest -s
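If you also want to test your own metrics tests API, a minimal sketch could use FastAPI's TestClient, assuming FairTestAPI behaves like a regular FastAPI application and that tests are served under their metric_path (adjust the import and route to your project; the subject URL is a placeholder):

from fastapi.testclient import TestClient
from main import app  # the FairTestAPI declared in main.py

client = TestClient(app)

def test_a1_check_something():
    # POST a subject URL to the metric test endpoint
    response = client.post(
        '/tests/a1-check-something',
        json={'subject': 'https://example.org/dataset'},
    )
    assert response.status_code == 200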

🌍 Projects using fair-test

Some projects already use fair-test to deploy FAIR test services, for example the fair-enough-metrics repository mentioned above.
