
Caidev Python


def UpgradeFunction():
    message = "Upgrading my skills [Py Version!]"
    return message

MockData Generator v3.1.24

The MockData Generator is a tool that, through a few configuration settings, creates random records in a '.csv' or '.sql' file for tables, based on their column structure, so that the data can later be loaded into a big data environment.

🚀 0.0 Starting ⤵️

Main <- You will find the module in charge of running the entire script here.

FileHandle <- You will find the module in charge of opening and creating files here.

GetData <- You will find the module in charge of collecting data and creating the mockdata here.

Search <- Module in charge of checking whether a file with old data already exists.

Create <- Module in charge of creating the data according to the configurations (SQL or CSV).

📋 0.1 Pre-requirements ⤵️

➡️ Install the libraries specified in 'requirements.txt', especially the Faker library.
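
➡️ For instance, once the requirements are installed, Faker can be checked from a Python shell; this is only a minimal sketch to verify the installation, not part of the script itself:

# Minimal check: confirm Faker is installed and producing sample values.
from faker import Faker

fake = Faker()
print(fake.name())   # a random full name
print(fake.email())  # a random email address
print(fake.date())   # a random date as 'YYYY-MM-DD'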

⚙️ 1.0 Configuration process ⤵️

➡️ First of all, you have to set some parameters for the script to work correctly; they are detailed below.

🔩 1.1 Main configuration.

➡️ Inside the "Configurations.json" file. ⤵️

{
    "Configurations":{
        "DatasetName":"MyDataset",
        "DatasetFileToOpen":"MyDataset.config.json",
        "NameOfDatasetToSaveInJson":"MyNewDataset.json",
        "Directory_To_Save_csvFiles":"CSV_Files",
        "Directory_To_Save_jsonFiles":"JSON_Files",
        "Directory_To_Save_sqlFiles":"SQL_Files",
        "SQL_Format":true
    }
}

➡️ The field 'DatasetName' refers to the name of the dataset in question; this field is used as part of the name of the files generated by the script (the csv with the mock data and the json with the pk of each dataset table).

➡️ The field 'DatasetFileToOpen' refers to the file that the script will open; it must contain the table settings needed to generate the mock data.

➡️ The field 'NameOfDatasetToSaveInJson' refers to the json file that the script will generate, containing the pk of each of the dataset tables. The recommended format is 'NameOfTheDataSet.json', where 'NameOfTheDataSet' is the name of the dataset.

➡️ The fields 'Directory_To_Save_csvFiles', 'Directory_To_Save_jsonFiles' and 'Directory_To_Save_sqlFiles' refer to the directories that will be created to store, respectively, the csv, json and sql files generated by the script.

➡️ The field 'SQL_Format' indicates whether the generated file is in sql format (true) or csv format (false).
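
➡️ As an illustration, these settings can be read with Python's standard json module; the variable names below are only for this sketch (the actual loading is handled by the FileHandle module):

import json

# Sketch only: load 'Configurations.json' and read the fields described above.
with open("Configurations.json", "r", encoding="utf-8") as config_file:
    settings = json.load(config_file)["Configurations"]

dataset_name = settings["DatasetName"]        # used to name the generated files
dataset_file = settings["DatasetFileToOpen"]  # file with the table structure
sql_format = settings["SQL_Format"]           # True -> .sql output, False -> .csv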

⚙️ 1.2 Configuration of the dataset. ⤵️

➡️ The name of this file must be the one specified in 'DatasetFileToOpen' in the 'Configurations.json' file. The configuration instructions can be found in the following Link

🛠️ 2.0 Script operation. ⤵️

➡️ As mentioned before, the libraries contained in 'requirements.txt' must be installed for the script to work correctly. The script opens the Configurations.json file to load the variables set by the user into its environment, then checks whether a json with tables and pks already exists in the directory, to avoid re-creating those tables and overwriting the existing data. It uses the variable 'DatasetFileToOpen' to open the file with the dataset table structure, so it can iterate over each table of the dataset and, within each table, over each column structure. As each column is iterated, its data is generated based on your settings. Finally, it creates a csv (or sql) file with said data for each table, as well as a json file containing the pks of each table.
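
➡️ The following is a simplified, hypothetical sketch of that flow; the real work is split across the FileHandle, GetData, Search and Create modules, and the column-to-provider mapping shown here is illustrative only:

import csv
from faker import Faker

fake = Faker()

def GenerateTableRows(columns, records):
    # 'columns' maps a column name to a Faker provider name, e.g. {"full_name": "name"}.
    # One row is generated per record by calling the matching Faker provider.
    return [
        {column: getattr(fake, provider)() for column, provider in columns.items()}
        for _ in range(records)
    ]

# Hypothetical table: five rows written to a csv file, one column per provider.
rows = GenerateTableRows({"full_name": "name", "email": "email"}, records=5)
with open("Customers.csv", "w", newline="", encoding="utf-8") as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)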


⚠️ Limitations ⤵️

The script does not create data following any business logic; it creates random data following the configuration patterns of the dataset.


📄 License ⤵️

This project is under the [GNU v3] license - read the LICENSE.md file for details.


📌 Technologies used. ⤵️

Python

VS Code


Where to find me: 🌎⤵️

🤴 Facu Falcone - Junior Developer
GitHub
LinkedIn
Buy me a coffee at cafecito.app (CafecitoApp)
Buy Me a Coffee at ko-fi.com (Ko-Fi)
⬆️ Go Top ➡️ Go to Faker WebPage
