This repository has been archived on 2026-03-27. You can view files and clone it. You cannot open issues or pull requests or push a commit.

  • Download the necessary dataset files to data/datasets as CSV (not zip). Move all TSV files from the LIAR zip file directly into the datasets folder.
  • Run setup.py to set up NLTK and to clean and split the datasets. This takes a long time, so please be patient.
  • Run main.py from the src directory to test the models. The script requires the model type, model file, and dataset to be passed as parameters, for example: python main.py --model_type logistic --model_file logistic.model --data_file 995,000_rows.parquet. The model files can be found in the models directory (not the one in src), and the data files can be found in data/testing (pass LIAR.parquet to test on the LIAR dataset). The available model types and more information, including how to train models, are listed by python main.py --help.
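The steps above can be sketched as a shell session. This is a sketch under the assumptions stated in the text: it is run from the repository root after the dataset files have been placed in data/datasets, and the model and data filenames follow the examples given above.

```shell
# Set up NLTK, then clean and split the datasets (this step is slow).
python setup.py

# Test a pretrained model; main.py must be run from the src directory.
# --model_type, --model_file, and --data_file are the parameters described
# above; see python main.py --help for the full list, including training.
cd src
python main.py --model_type logistic --model_file logistic.model --data_file 995,000_rows.parquet

# Evaluate the same model on the LIAR test split instead.
python main.py --model_type logistic --model_file logistic.model --data_file LIAR.parquet
```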
Description
Codeberg went down right at the end of the project; this repository is a backup, just in case.