## Package Installation
Install any packages you want to use. For our example, switch to the package manager with `]` and run:
```julia
add LowRankModels
add DecisionTree
```
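If you prefer to stay in the normal REPL rather than the package-manager mode, the same installation can be done programmatically through Julia's standard `Pkg` API (a sketch; same package names as above):

```julia
# Programmatic equivalent of the `add` commands in package-manager mode
using Pkg
Pkg.add("LowRankModels")
Pkg.add("DecisionTree")
```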
## Quickstart
Congratulations, you are all set! As an end-to-end example, run:
```julia
using ConML
using LowRankModels # some algorithms for Construction and Reconstruction, use what you want
using DecisionTree
toy = open("Path\\To\\toy.csv", "r")
readline(toy) # skip the header line
# empty arrays for the data
T = Vector{Int}()
Sigma = Vector{String}()
Z = Vector{String}()
data = Vector{Vector{Float64}}()
# separate data from metadata columns
for rline in readlines(toy)
    line = split(rline, ',')
    push!(T, parse(Int, line[353]))
    push!(Sigma, string(line[354]))
    push!(Z, string(line[355]))
    push!(data, map(x -> parse(Float64, x), line[2:352]))
end
close(toy) # release the file handle
# cast it into a proper format for the algorithm
block = ConML.VMS{Float64}(T, Sigma, Z, data)
# set parameters
# we have to specify values for which no defaults exist
par = ConML.ParametersConML(LearnBlockMinimum = 1, maxCategories = 10, MinCategorySize = 20, maxFilterFeatures = 1000, maxFilterSamples = 2000)
# create empty knowledge base
kb = ConML.KnowledgeBase{Int}(ConML.VMS{Int}(), Vector{ConML.MachineModel}())
# create steps for the algorithm
myConstruction = ConML.Construct([LowRankModels.KMeans(k=2), LowRankModels.KMeans(k=3), LowRankModels.KMeans(k=4)])
myReconstruction = ConML.Reconstruct([DecisionTree.DecisionTreeClassifier(max_depth=2), DecisionTree.AdaBoostStumpClassifier(n_iterations=10)])
featureSelection = ConML.FeatureSelector()
# make a pipeline
learn = ConML.LearnerConML(kb, par, [ConML.searchLearnBlocks, myConstruction, featureSelection, myReconstruction])
# feed it!
learn(block)
```
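To see what the parsing loop above does to a single row, here is a self-contained miniature. It uses a hypothetical five-column row, so the column indices differ from the real 355-column `toy.csv`; the mechanics (`split`, `parse`, slicing) are the same:

```julia
# Miniature of the row parsing: column 1 is an ID we skip,
# columns 2-3 are numeric features, columns 4-5 are metadata.
rline = "id1,0.5,1.5,7,classA"
line = split(rline, ',')
t = parse(Int, line[4])                             # -> 7
sigma = string(line[5])                             # -> "classA"
features = map(x -> parse(Float64, x), line[2:3])   # -> [0.5, 1.5]
```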
## Update
Navigate to the project folder and run `git pull`. Afterwards, go to the Julia package manager (`]`) and run `update`.
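Put together, an update might look like this from a shell (the checkout path is hypothetical; use wherever you cloned the project):

```shell
cd ~/path/to/ConML                    # project checkout (hypothetical path)
git pull                              # fetch the latest code
julia -e 'using Pkg; Pkg.update()'    # refresh the registered dependencies
```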
\ No newline at end of file