There are, for example, multiple decision tree learners and rule learners. Each one has different semantics and behaves differently on the data. Simply running every one of them and seeing which performs best is a completely normal approach.
And with k-fold cross-validation it is much harder to overfit, since every model is scored on folds it was never trained on.
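The "run every learner and keep the best" idea can be sketched with scikit-learn's `cross_val_score`. This is a minimal illustration, not a prescription: the iris dataset and the two particular learners are just placeholder assumptions.

```python
# Compare several learners with 5-fold cross-validation and keep the best one.
# Minimal sketch assuming scikit-learn is installed; dataset and learners
# are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

learners = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# Each learner is scored on 5 held-out folds, so no model is ever
# evaluated on the exact data it was trained on.
mean_scores = {name: cross_val_score(clf, X, y, cv=5).mean()
               for name, clf in learners.items()}

best = max(mean_scores, key=mean_scores.get)
print(best, round(mean_scores[best], 3))
```

One caveat worth knowing: picking the winner among many learners by their CV score can still overfit the selection itself, so a final check on a truly untouched test set is good practice.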