I am trying to build logic in a flow to compare two JSON arrays. The arrays are created within the flow based on source data, and they use the same schema, so the objects within the arrays are comparable.
One of the arrays (the source) is based on the data in an Excel sheet. The other array is based on the data from a SharePoint list. I would like to extract all objects from the source array that are not available in the array based on the SharePoint list. The result should be an array that contains only the objects that need updating in the SharePoint list.
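To illustrate the goal, here is a minimal Python sketch of the desired difference; the field names are only examples, and in a real flow this would be built with array actions rather than Python:

```python
# Source rows built from the Excel sheet (illustrative field names).
source = [
    {"DebiteurNr": 1, "Name": "Alpha"},
    {"DebiteurNr": 2, "Name": "Beta"},
]

# Rows currently in the SharePoint list.
sharepoint = [
    {"DebiteurNr": 1, "Name": "Alpha"},
]

# Every source object without an exact match in the SharePoint array
# is an object that needs to be created or updated in the list.
to_update = [obj for obj in source if obj not in sharepoint]

print(to_update)  # [{'DebiteurNr': 2, 'Name': 'Beta'}]
```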
I built the logic for this using an Apply to each loop, but this takes too long: 45 minutes to complete with 1,000 items and maximum concurrency. My arrays will contain roughly 40,000 items, so this takes forever to complete.
Is there a way to speed things up?
The process I have built so far:
JSON array built based on data from Excel
JSON array built based on data from SharePoint
There's a way that you could do this quicker, but it involves SharePoint.
You need to add a constraint to the column you're comparing on so that it does not allow duplicate values. Then dump the second array there and use the error paths to "ignore" the duplicates. After that, fetch all values from SharePoint, build the final array, and delete the values in SharePoint for future runs.
This will "catch" the invalid insert and continue.
I think this would be faster since SharePoint takes care of the comparison, and inserting stuff is quite fast.
If I have answered your question, please mark your post as Solved.
If you like my response, please give it a Thumbs Up.
Thanks for your reply @manuelstgomes,
The thing is that the only value that remains constant is the DebiteurNr; the other values can change.
Most of the time an item already exists with the DebiteurNr, but it needs to be updated because one of the other values in the object has changed. If I understand your solution correctly, it only works for items that are added to the source and are not yet available in the SharePoint list.
The bigger problem I am trying to fix is this: my idea was to minimize the number of items to be updated so I can minimize communication with SharePoint. This will need to be done once a week, and there are probably a maximum of 1,000 changed items instead of 40,000.
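The keyed comparison described above can be sketched in Python. DebiteurNr is the stable key mentioned in this thread; the other field names are hypothetical. Indexing the SharePoint rows by key means each source row is checked with one dictionary lookup instead of a scan of the whole list:

```python
# Rows currently stored in SharePoint (hypothetical field names).
sharepoint = [
    {"DebiteurNr": 1, "Name": "Alpha", "City": "Utrecht"},
    {"DebiteurNr": 2, "Name": "Beta",  "City": "Breda"},
]

# Rows from the weekly Excel source.
source = [
    {"DebiteurNr": 1, "Name": "Alpha", "City": "Utrecht"},  # unchanged
    {"DebiteurNr": 2, "Name": "Beta",  "City": "Delft"},    # changed field
    {"DebiteurNr": 3, "Name": "Gamma", "City": "Arnhem"},   # new item
]

# Index SharePoint rows by the stable key, DebiteurNr.
by_key = {row["DebiteurNr"]: row for row in sharepoint}

# Keep only rows that are new or differ from the stored version.
to_update = [row for row in source if by_key.get(row["DebiteurNr"]) != row]

print(len(to_update))  # 2 (the changed row and the new row)
```

Only these rows would then be written back to SharePoint, which matches the goal of touching roughly 1,000 items instead of 40,000.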
Hi @wvp ,
If you have already applied the following settings for Apply to each but the performance is still not satisfactory, I think this has indeed exceeded the limit of what Flow can handle, because there is a lot of data to compare.
You could try using multiple Flows with the same configuration, but group all items by ID.
Each Flow then processes an equal number of items. For example, if you have 1,000 items, divide them into five groups and process them in five identical Flows.
You can filter each group of items by ID in each Flow.
Hope this helps.
Hi @v-bacao-msft ,
I indeed seem to have hit the limits of Flow. If only it were faster to loop through a simple JSON array; I was hoping this would be more efficient in Flow. Concurrency control does not speed things up; it seems to slow them down even more.
As a comparison, I tried to do the same in an Azure Logic App, which did the whole transaction in 1.5 hours. Unfortunately, at this moment that is not an option.
I will investigate multiple Flows!
I ran into the same issue and the following solution was very efficient:
This should output a list of items from your first array that do not exist in the second.
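An efficient way to compute such a difference is a hash-based lookup: serialising each object of the second array to a canonical string makes each membership check O(1), so the whole comparison becomes a single pass instead of a nested loop. A Python sketch of the idea (not necessarily the exact actions the poster used):

```python
import json

# Build sample arrays; pretend the last 1,000 source rows are missing
# from the SharePoint-based array.
source = [{"DebiteurNr": n, "Name": f"Cust{n}"} for n in range(40_000)]
sharepoint = source[:39_000]

# Serialise each SharePoint object to a canonical string so membership
# checks become O(1) set lookups instead of scanning the whole array.
existing = {json.dumps(obj, sort_keys=True) for obj in sharepoint}

missing = [obj for obj in source
           if json.dumps(obj, sort_keys=True) not in existing]

print(len(missing))  # 1000
```

This runs in well under a second for 40,000 items, which is why offloading the comparison from per-item loop actions makes such a difference.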