Hi guys, I have an issue where, when I fetch data from D365 and then write it to an Excel file stored in SharePoint, I get a discrepancy in the results. Here is my process in detail:
1. Get a list of data from D365.
2. Do an "Apply to each" over the list fetched in #1.
3. Inside the loop, write each record from #2 as a row in the Excel file stored in SharePoint.
4. After that is done, delay 5 minutes to ensure everything is written.
5. Fetch the file from SharePoint and distribute it by email.
The issue: for example, #1 fetches 1000 rows, but the Excel file I receive in the email sometimes has only 800 or so rows. Other times the writing goes smoothly and no data is missing from the Excel file. How can we prevent this from happening? Thank you
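One way to sidestep the per-row writes entirely is to add all of the fetched rows to the Excel table in a single batched call instead of one write per loop iteration. Below is only a rough sketch of that idea using the Microsoft Graph workbook API; the DRIVE_ID, ITEM_ID, TABLE_NAME values and the token handling are placeholders, not anything taken from the flow above.

```python
# Sketch only: push every fetched D365 record into the Excel table with one
# batched Graph request instead of one "Add a row" action per record.
# DRIVE_ID, ITEM_ID, and TABLE_NAME are placeholders for the real workbook.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
DRIVE_ID = "<document-library-drive-id>"   # placeholder
ITEM_ID = "<excel-file-item-id>"           # placeholder
TABLE_NAME = "Table1"                      # placeholder table inside the workbook


def add_rows_in_one_call(token: str, rows: list[list]) -> None:
    """POST all rows at once to the table's rows/add action."""
    url = (f"{GRAPH}/drives/{DRIVE_ID}/items/{ITEM_ID}"
           f"/workbook/tables/{TABLE_NAME}/rows/add")
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={"index": None, "values": rows},  # rows = [[col1, col2, ...], ...]
        timeout=60,
    )
    resp.raise_for_status()


# Example: 1000 D365 records become one request instead of 1000 separate writes.
# add_rows_in_one_call(token, [[r["id"], r["name"], r["amount"]] for r in records])
```

Since a single request body has size limits, a real implementation would likely chunk the rows (say, a few hundred per call) rather than send thousands in one request.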
Hey @Anonymous
Could you be seeing the effects of parallelism? The Apply to each iterations normally run in parallel, so two of them could be trying to update your Excel file at the same time and one overwrites the other.
If my reply helped, consider marking it as the answer. Thanks for taking the time to share your issue and help the community.
Hi,
I am only using a concurrency level of 1, since higher values sometimes cause write request failures to SharePoint. What is also weird is that, for example, when I fetch 200 rows from D365 and write them to the Excel file in the SharePoint directory, the run history says all 200 rows were written successfully, but when I check the Excel file in SharePoint, sometimes there are fewer than 200 rows. So what would be the best solution for this?
Best Regards,
Kydo
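Whatever way the rows get written, one option is to verify the row count in the workbook before the email step and only distribute the file once it matches what was fetched from D365, rather than relying on a fixed delay. A rough sketch of that check via the Graph workbook API follows; again the IDs and table name are placeholders.

```python
# Sketch of a pre-email sanity check: read how many data rows the SharePoint
# workbook actually contains and compare with the number fetched from D365.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"


def table_row_count(token: str, drive_id: str, item_id: str, table: str) -> int:
    """Return the number of data rows (header excluded) in the Excel table."""
    url = (f"{GRAPH}/drives/{drive_id}/items/{item_id}"
           f"/workbook/tables/{table}/dataBodyRange")
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"},
                        timeout=60)
    resp.raise_for_status()
    return resp.json()["rowCount"]


def safe_to_send(token: str, drive_id: str, item_id: str,
                 table: str, expected_rows: int) -> bool:
    """Only email the file once every fetched record is present in the table."""
    return table_row_count(token, drive_id, item_id, table) >= expected_rows
```

The same idea could be built inside the flow itself with a "List rows present in a table" action and a condition on the returned count before the send-email step.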