FlagAI


FlagAI also supports BERT, RoBERTa, GPT2, T5, and models from Huggingface Transformers.

  • It provides APIs to quickly download and use those pre-trained models on a given text, fine-tune them on widely-used datasets collected from the SuperGLUE and CLUE benchmarks, and then share them with the community on our model hub. It also provides a prompt-learning toolkit for few-shot tasks.

  • These models can be applied to Chinese and English text for tasks like text classification, information extraction, question answering, summarization, and text generation.

  • FlagAI is backed by the three most popular data/model parallel libraries — PyTorch/Deepspeed/Megatron-LM — with seamless integration between them. Users can parallelize their training/testing process with less than ten lines of code.

  • The code is partially based on GLM, Transformers and DeepSpeedExamples.
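
    The prompt-learning approach mentioned above recasts a few-shot task as filling a blank in a natural-language template. Below is a minimal, library-free sketch of the idea; the template and verbalizer are illustrative placeholders, not FlagAI's actual API:

    ```python
    # Minimal prompt-learning sketch: wrap a classification example in a
    # cloze-style template and map predicted label words back to classes.
    TEMPLATE = "Review: {text} Overall, it was [MASK]."
    VERBALIZER = {"great": "positive", "terrible": "negative"}

    def build_prompt(text: str) -> str:
        """Render the cloze template for one input example."""
        return TEMPLATE.format(text=text)

    def read_prediction(label_word: str) -> str:
        """Map the word predicted at [MASK] back to a task label."""
        return VERBALIZER[label_word]

    prompt = build_prompt("The film was a delight from start to finish.")
    print(prompt)
    print(read_prediction("great"))  # -> positive
    ```

    A masked language model would fill the [MASK] slot; the verbalizer then converts its word choice into a class label, so no task-specific head needs training.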

    Requirements and Installation

    • PyTorch version >= 1.8.0
    • Python version >= 3.8
    • For training/testing models on GPUs, you’ll also need to install CUDA and NCCL

    To install FlagAI with pip:

    pip install -U flagai
    • [Optional] To install FlagAI and develop locally:

    git clone https://github.com/BAAI-Open/FlagAI.git
    python setup.py install
    • [Optional] For faster training install NVIDIA’s apex

    git clone https://github.com/NVIDIA/apex
    cd apex
    pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
    
    • [Optional] For ZeRO optimizers install DEEPSPEED

    git clone https://github.com/microsoft/DeepSpeed
    cd DeepSpeed
    DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install -e .
    ds_report # check the DeepSpeed status
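
    Once DeepSpeed is installed, ZeRO is enabled through a JSON config file passed to the trainer. A minimal illustrative config is sketched below; the values are placeholders, so consult the DeepSpeed documentation to tune them for your hardware:

    ```json
    {
      "train_micro_batch_size_per_gpu": 4,
      "gradient_accumulation_steps": 1,
      "fp16": { "enabled": true },
      "zero_optimization": {
        "stage": 2,
        "contiguous_gradients": true,
        "overlap_comm": true
      }
    }
    ```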
    
    • [Tips] For single-node docker environments, you need to set up a port for SSH, e.g., [email protected] with port 7110

    >>> vim ~/.ssh/config
    Host 127.0.0.1
        Hostname 127.0.0.1
        Port 7110
        User root
    
    • [Tips] For multi-node docker environments, generate SSH keys and copy the public key to all nodes (in ~/.ssh/)
    >>> ssh-keygen -t rsa -C "[email protected]"
    

    Quick Start

    We provide many models trained to perform different tasks. You can load these models with AutoLoader to make predictions. See more in FlagAI/quickstart.

    Load model and tokenizer

    We provide the AutoLoader class to load the model and tokenizer quickly, for example:

    from flagai.auto_model.auto_loader import AutoLoader
    
    auto_loader = AutoLoader(
        task_name="title-generation",
        model_name="BERT-base-en"
    )
    model = auto_loader.get_model()
    tokenizer = auto_loader.get_tokenizer()

    This example is for the title-generation task; you can run other tasks by modifying the task_name. Then you can use the model and tokenizer to fine-tune or test.

    Predictor

    We provide the Predictor class to run predictions for different tasks, for example:

    from flagai.model.predictor.predictor import Predictor
    predictor = Predictor(model, tokenizer)
    test_data = [
        "Four minutes after the red card, Emerson Royal nodded a corner into the path of the unmarked Kane at the far post, who nudged the ball in for his 12th goal in 17 North London derby appearances. Arteta's misery was compounded two minutes after half-time when Kane held the ball up in front of goal and teed up Son to smash a shot beyond a crowd of defenders to make it 3-0.The goal moved the South Korea talisman a goal behind Premier League top scorer Mohamed Salah on 21 for the season, and he looked perturbed when he was hauled off with 18 minutes remaining, receiving words of consolation from Pierre-Emile Hojbjerg.Once his frustrations have eased, Son and Spurs will look ahead to two final games in which they only need a point more than Arsenal to finish fourth.",
    ]
    
    for text in test_data:
        print(
            predictor.predict_generate_beamsearch(text,
                                                  out_max_length=50,
                                                  beam_size=3))
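
    Under the hood, predict_generate_beamsearch performs beam search decoding: at each step it keeps the beam_size highest-scoring partial sequences and extends them until out_max_length tokens are produced. The following is a self-contained sketch of the algorithm; the toy scoring function stands in for the model's log-probabilities and is purely illustrative:

    ```python
    def beam_search(score_fn, vocab, beam_size=3, max_length=5, eos="</s>"):
        """Generic beam search: keep the `beam_size` best partial
        sequences, extending each beam by every vocab token per step."""
        beams = [([], 0.0)]  # (tokens, cumulative log-prob)
        for _ in range(max_length):
            candidates = []
            for tokens, score in beams:
                if tokens and tokens[-1] == eos:
                    candidates.append((tokens, score))  # finished beam
                    continue
                for tok in vocab:
                    candidates.append((tokens + [tok],
                                       score + score_fn(tokens, tok)))
            # Prune to the beam_size best hypotheses.
            beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_size]
        return beams[0][0]

    # Toy scorer that strongly prefers the sequence "spurs win derby </s>".
    TARGET = ["spurs", "win", "derby", "</s>"]
    def toy_score(prefix, tok):
        pos = len(prefix)
        return 0.0 if pos < len(TARGET) and TARGET[pos] == tok else -5.0

    print(beam_search(toy_score, ["spurs", "win", "derby", "lose", "</s>"]))
    # -> ['spurs', 'win', 'derby', '</s>']
    ```

    A larger beam_size explores more hypotheses at higher cost; beam_size=1 degenerates to greedy decoding.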

    Pretrained Models and examples

    This section explains how the base NLP classes work, how you can load pre-trained models to tag your text, how you can embed your text with different word or document embeddings, and how you can train your own language models, sequence labeling models, and text classification models. Let us know if anything is unclear. See more in FlagAI/examples.

    Tutorials

    We provide a set of quick tutorials to get you started with the library:

    Contributing

    Thanks for your interest in contributing! There are many ways to get involved; start with our contributor guidelines and then check these open issues for specific tasks.

    Contact us

    Scan the WeChat QR code

    License

    The majority of FlagAI is licensed under the Apache 2.0 license; however, portions of the project are available under separate license terms:
