# Wiki Rasa

## Installation

There are two options:

1. Have Python 3.6.6 installed, or downgrade from 3.7 (which TensorFlow does not support yet). Then install Rasa Core with `pip install rasa_core` and Rasa NLU with `pip install rasa_nlu`.
2. Install Anaconda, create a Python 3.6.6 environment, and install Rasa into it.
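Option 2 can be sketched as follows, assuming Anaconda is already installed and on the `PATH` (the environment name `rasa` is arbitrary, not prescribed by the project):

```sh
# create and activate an isolated Python 3.6.6 environment
conda create -n rasa python=3.6.6
conda activate rasa

# install both Rasa packages into that environment
pip install rasa_core
pip install rasa_nlu
```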

## Getting the example project running

Download `stories.md`, `domain.yml`, and `nlu.md`.
Create `nlu_config.yml` with the following content:

```yaml
language: en
pipeline: tensorflow_embedding
```

The model can then be trained with:

```sh
# rasa core
python -m rasa_core.train -d domain.yml -s stories.md -o models/dialogue

# rasa nlu (natural language understanding)
python -m rasa_nlu.train -c nlu_config.yml --data nlu.md -o models --fixed_model_name nlu --project current --verbose
```

After that, you can talk to the bot with:

```sh
python -m rasa_core.run -d models/dialogue -u models/current/nlu
```

## R Scripts

### PhysicistsList.R

Crawls Wikipedia's List of Physicists for all physicist names and uses that list to download the corresponding articles from the Wikipedia API. Generates a CSV of the gathered articles in the `data` directory, as well as an RDS object containing the same data in binary form.
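The per-article download step described above can be illustrated with a small Python sketch of one Wikipedia API call per physicist name (the actual script is written in R; `article_params` and `fetch_article` are hypothetical names for illustration, not part of the repository):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def article_params(title):
    """Query parameters for fetching one article's plain-text extract."""
    return {
        "action": "query",
        "format": "json",
        "prop": "extracts",
        "explaintext": 1,
        "titles": title,
    }

def fetch_article(title):
    # hypothetical helper mirroring what PhysicistsList.R does per name
    with urlopen(API + "?" + urlencode(article_params(title))) as resp:
        pages = json.load(resp)["query"]["pages"]
    # the API keys results by numeric page id; take the single page returned
    return next(iter(pages.values())).get("extract", "")
```

The R script additionally scrapes the list page itself to obtain the names and then writes the collected texts out with `write.csv` and `saveRDS`.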