I would like to port my employer's Excel "database" to a more robust system, and Dataverse/Power Apps looks like a great option - on paper. Apparently you can have up to 1M rows per table, which would more than accommodate our needs. However, I am having major problems importing the Excel sheet into a table.

I have tried the "Import data from Excel" option (dropping the file into the "upload here" slot) and the "Import data" option (Power Query). The first option just throws errors left and right, and even when it goes through, the resulting data is a mess: data is lost and IDs are not properly reflected (the first rows in the ID column are 10, 100, 1000 - skipping everything in between). With the second option the columns are better preserved, but data apparently gets lost here as well: I was only able to import around 36k rows from my Excel sheet of 500k rows.
Does anyone know a better way to get all the data into the table?
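For reference, one way to sanity-check how much data should end up in Dataverse is to count the rows in the source workbook first and compare that with what the import produces. A minimal Python/pandas sketch, where the file name, sheet name, and the "ID" column are placeholders (not taken from this thread):

```python
import pandas as pd

# "database.xlsx", "Sheet1" and "ID" are hypothetical placeholders --
# substitute the actual workbook, sheet and key column.
df = pd.read_excel("database.xlsx", sheet_name="Sheet1")

print(f"Rows in source sheet: {len(df)}")
print(f"Distinct IDs:         {df['ID'].nunique()}")
```

If the source really has ~500k rows and distinct IDs, anything substantially smaller on the Dataverse side points to a lossy import rather than a display issue.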
@peterharket, the approach I have been using is Dataflows for importing data from Excel files.
Daniel Christian has a good video showing how >> https://www.youtube.com/watch?v=21ONiff4hxo
Regards,
Ahmed
If my reply helped you, please give a 👍. And if it has solved your issue, please consider accepting it as the solution so other members of the community can find it more easily.
Thanks for the quick reply, Ahmed, but that is the second approach I took, and the resulting data is still lossy/incomplete when I use "Import data". Do you think it would help if I created the table in Dataverse first and then imported the data? Should that really matter?
SOLUTION:
I realize now that I had not let the dataflow run its full course before looking for data at the end of the list. As for the weird data (1001, for example), this is just a strange naming convention for 1 when datasets become very large, so the data is actually all there.
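For anyone hitting the same thing: one common reason IDs show up as 10, 100, 1000 at the top of the list is that numeric IDs are being sorted as text. That is an assumption about the cause rather than something confirmed in this thread, but the effect is easy to reproduce:

```python
# Numeric IDs sorted as text appear "out of order", which can look like
# missing rows even though every row is present.
ids = list(range(1, 1200))             # 1, 2, 3, ..., 1199

as_text = sorted(str(i) for i in ids)  # lexicographic (text) sort
print(as_text[:6])                     # ['1', '10', '100', '1000', '1001', '1002']

# Sorting numerically restores the expected order, and no rows are lost.
as_numbers = sorted(int(s) for s in as_text)
print(as_numbers[:6])                  # [1, 2, 3, 4, 5, 6]
print(len(as_text) == len(ids))        # True
```

So if the row count matches the source, an odd-looking ID order by itself does not mean the import dropped data.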