In Power Apps/Dataverse I can export tables to a Gen2 data lake via Synapse Link. Once configured, the data arrives happily in my data lake in CSV format.
However, I now find myself subject to the age-old problem of delimiting characters in data. In this instance, if there's a comma in an individual field's data, then any mapping I have, say from a Copy Data activity, is corrupted.
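To make the failure mode concrete, here's a minimal sketch (the field values are made up) of why an embedded comma shifts columns when a file is split naively, but survives when the field is properly quoted and read by a real CSV parser:

```python
import csv
import io

# A row where one field contains a comma. If the producer quotes the field,
# a CSV parser keeps it intact; a naive split on "," shifts every column.
raw = 'Contoso, Ltd.,London,Active\n'       # unquoted, as if emitted bare
quoted = '"Contoso, Ltd.",London,Active\n'  # quoted, parser-safe

naive = raw.strip().split(',')                  # 4 columns - mapping breaks
parsed = next(csv.reader(io.StringIO(quoted)))  # 3 columns - mapping holds

print(naive)   # ['Contoso', ' Ltd.', 'London', 'Active']
print(parsed)  # ['Contoso, Ltd.', 'London', 'Active']
```

So the question comes down to whether the exported files quote (or escape) such fields, and whether the consuming dataset is configured with a matching quote character.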
There seem to be Data Flow pipelines within ADF that read and infer the fields correctly, but Data Flow pipelines cannot be used in conjunction with a self-hosted integration runtime, which is what I require. I have also seen advice that I should transform the data to a Parquet file first and then export from there, but that would require more Azure resources at inherent extra cost.
This seems absolutely fundamental, and it would break this Microsoft-prescribed pattern and solution for a large proportion of use cases.
The export process for these files seems to be a bit of a black box, and I can't see a way of stipulating a custom delimiter as part of it.
Does anyone know how I might solve this?