Hi, just looking for some recommendations on testing with AI Builder. I'm currently using the Prediction model with timesheet entries used to track the length of time worked per project. These are broken down into day, time, client, project, work done, and the user performing the work.
We use a yes/no field called "Is Correct Rate," which I've currently set the model to predict for future records, using the fields below to influence the outcome:
Currently, every test I run outputs a score of 98-99% on a data set of 6,000+ records, yet based on some initial testing the model is behaving incorrectly and marking the "Is Correct Rate" field of every new record as false. Billable Rate consistently shows as the most influential field.
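For reference, this is roughly how I checked the label split on an export of the records (the file path and column name here are just placeholders for whatever your export uses):

```python
import pandas as pd

# Load an export of the timesheet records (path is a placeholder).
df = pd.read_csv("timesheet_entries.csv")

# Count how many records fall into each "Is Correct Rate" class.
print(df["Is Correct Rate"].value_counts())

# Show the same counts as a percentage of all records.
print(df["Is Correct Rate"].value_counts(normalize=True) * 100)
```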
Any recommendations as to why this would consistently score 98-99%, and any information on split testing and training to train AI Builder more relevantly? Thanks in advance for any info anyone can provide here.
As we discussed, you kept seeing a high training score because the training dataset you provided is heavily imbalanced towards the positive class. This is not to say you should change your training data to make it more balanced; we would like you to keep using a dataset that represents your production data distribution. The AI Builder team is already aware that accuracy isn't a good measurement for an imbalanced training set. We will soon release a new grading system to measure model performance that covers both balanced and imbalanced datasets, so you can decide whether or not to use a model based on its grade: A, B, C, or D (A is the best; D would most likely require you to fix some issue).
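To make this concrete, here is a minimal sketch (plain scikit-learn with simulated labels, not AI Builder itself) of why accuracy alone is misleading here: a dummy "model" that predicts the majority class for every record already scores around 98% accuracy on a 6,000-record set with this kind of imbalance, while per-class precision and recall expose that the minority class is never caught:

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

# Simulate 6,000 labels where ~98% are "yes" (1) and ~2% are "no" (0),
# mimicking an imbalanced "Is Correct Rate" column.
rng = np.random.default_rng(0)
y_true = rng.choice([1, 0], size=6000, p=[0.98, 0.02])

# A "model" that always predicts the majority class, ignoring all features.
y_pred = np.ones_like(y_true)

# Accuracy looks excellent even though the model learned nothing.
print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")  # ~0.98

# Per-class precision/recall reveal the minority class is never predicted.
print(classification_report(y_true, y_pred, zero_division=0))
```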
And for this particular model, you'll get a B, which might reflect its true status better.