backup since codeberg is down
- download the necessary dataset files to data/datasets as CSV (not ZIP). Move all TSV files from the LIAR zip file directly into the datasets folder.

- run setup.py to set up NLTK and to clean and split the datasets. This takes a long time; please wait.

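The clean-and-split work done by setup.py is not shown in this README. As a rough sketch only (the function name, cleaning rules, and split ratio below are assumptions, not the actual setup.py logic; the real script also downloads NLTK data):

```python
import random

def clean_and_split(rows, test_fraction=0.2, seed=42):
    """Hypothetical stand-in for the clean-and-split step:
    drop empty rows, collapse whitespace, then shuffle and
    split into train/test portions."""
    cleaned = [" ".join(r.split()) for r in rows if r.strip()]
    rng = random.Random(seed)
    rng.shuffle(cleaned)
    cut = int(len(cleaned) * (1 - test_fraction))
    return cleaned[:cut], cleaned[cut:]

# Tiny in-memory example instead of the real CSV/TSV files:
sample = ["fake  news headline", "", "  true claim about taxes ", "another statement"]
train, test = clean_and_split(sample)
print(len(train), len(test))  # → 2 1
```

The real script operates on the files in data/datasets rather than in-memory lists.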
- run main.py from the src directory to test the models. The script requires the model type, model file, and dataset to be passed as parameters.

Here is an example: `python main.py --model_type logistic --model_file logistic.model --data_file 995,000_rows.parquet`

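The flag names in that invocation suggest an argparse interface roughly like the following (a sketch: only the three flag names and the `logistic` model type come from this README; everything else, including the help text, is an assumption):

```python
import argparse

def build_parser():
    # Flag names taken from the example invocation above;
    # description and help strings are assumed.
    p = argparse.ArgumentParser(description="Test a trained model on a dataset.")
    p.add_argument("--model_type", required=True, help="e.g. logistic")
    p.add_argument("--model_file", required=True, help="file in the models directory")
    p.add_argument("--data_file", required=True, help="file in data/testing")
    return p

args = build_parser().parse_args(
    ["--model_type", "logistic",
     "--model_file", "logistic.model",
     "--data_file", "LIAR.parquet"]
)
print(args.model_type, args.data_file)  # → logistic LIAR.parquet
```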
The model files can be found in the models directory (not the one inside src); the data files can be found in data/testing (pass LIAR.parquet to test on the LIAR dataset).

The model types and more information, including how to train models, can be found with `python main.py --help`.