At the moment, creating the configuration files is still a patchy process: they have to be copied manually from the sample_data location.
Mikado needs to be able to run "mikado.py configure" as a first step to create the necessary files.
Ideally, it would have two options:
Possible inspiration: sklearn.datasets.load_iris(); here is the code:

```python
from os.path import dirname, join
import csv

import numpy as np
from sklearn.utils import Bunch

def load_iris():
    module_path = dirname(__file__)
    with open(join(module_path, 'data', 'iris.csv')) as csv_file:
        data_file = csv.reader(csv_file)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        target_names = np.array(temp[2:])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        for i, ir in enumerate(data_file):
            data[i] = np.asarray(ir[:-1], dtype=float)
            target[i] = np.asarray(ir[-1], dtype=int)
    with open(join(module_path, 'descr', 'iris.rst')) as rst_file:
        fdescr = rst_file.read()
    return Bunch(data=data, target=target,
                 target_names=target_names,
                 DESCR=fdescr,
                 feature_names=['sepal length (cm)', 'sepal width (cm)',
                                'petal length (cm)', 'petal width (cm)'])
```
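Following that pattern, a `mikado.py configure` subcommand could ship default configuration values with the package and write them out on request. A minimal sketch of the idea, assuming hypothetical section names and defaults (the real values would come from Mikado's sample_data files):

```python
import json

# Hypothetical defaults, for illustration only; the real contents
# would be taken from Mikado's sample_data configuration files.
SIMPLE_DEFAULTS = {"prepare": {"strand_specific": False}}
FULL_DEFAULTS = {
    "prepare": {"strand_specific": False},
    "pick": {"alternative_splicing": {"report": True}},
}

def write_configuration(path, full=False):
    """Write a default configuration file to `path`.

    With full=True the complete set of sections is emitted;
    otherwise only a minimal 'simple' configuration is written.
    Returns the dictionary that was serialised.
    """
    defaults = FULL_DEFAULTS if full else SIMPLE_DEFAULTS
    with open(path, "w") as handle:
        json.dump(defaults, handle, indent=2)
    return defaults
```

The simple/full split mirrors the "two options" idea above: one minimal file for quick starts, one exhaustive file exposing every tunable.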
Progress in 0c2c04b. Switched to a JSON schema; the migration is going well but is still ongoing.
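A JSON schema makes it possible to reject malformed user configurations up front. A stdlib-only sketch of the idea, with invented section names purely for illustration (a real implementation would delegate to a proper JSON-schema validator):

```python
import json

# Invented, illustrative "schema": expected top-level sections
# and the type each one must have.
CONFIG_SCHEMA = {
    "prepare": dict,
    "serialise": dict,
    "pick": dict,
}

def validate_config(text):
    """Parse a JSON configuration string and check it against CONFIG_SCHEMA.

    Returns the parsed configuration; raises ValueError if a section
    is missing or has the wrong type.
    """
    config = json.loads(text)
    for key, expected in CONFIG_SCHEMA.items():
        if key not in config:
            raise ValueError("missing section: %s" % key)
        if not isinstance(config[key], expected):
            raise ValueError("section %s must be a %s"
                             % (key, expected.__name__))
    return config
```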
Great progress on issue #1, now I can produce both simple and full configuration files from the CLI. (d67e99b)
Solved in 5b972be.
Merge pull request #1 from lucventurini/development (68db67d)