Kscott10165
Employee

Referencing common table in files with various names

To set the stage, I am building a flow where users can drop Excel files into a SharePoint folder. The flow references one specific table in each file and, for each row in that table, creates a new item on a SharePoint list with that row's information.

 

The hang-up is referencing a specific table when there is no set file name, and then referencing that table's columns when creating the new list item. Any ideas?

1 ACCEPTED SOLUTION

grantjenkins
Super User

Here are a couple of examples that should hopefully get you what you're after.

 

I have the following SharePoint List.

grantjenkins_0-1668646557179.png

 

And I have a folder in my library where the Excel files will be uploaded to. It doesn't matter what the file names are, as long as they have a Table called SalesTable and the appropriate columns. As an example:

grantjenkins_1-1668646652222.png

 

 

EXAMPLE 1 - Using JSON Schema

 

The full flow is below. I'll go into each of the actions.

grantjenkins_2-1668646760013.png

 

When a file is created (properties only) is set to my Dropoff folder.

grantjenkins_3-1668646803185.png

 

List rows present in a table uses the Full Path from the trigger and has the Table name hardcoded to SalesTable.

grantjenkins_4-1668646867204.png

 

Before moving to the next step, you should save and run the flow, then copy the output from List rows present in a table. We will use this to generate the schema in the next step. Once you've copied the output, go back to edit mode and proceed with the next step.
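For reference, the value you copy is an array with one object per table row. It will look roughly like this (the row values below are made up purely for illustration; your objects will have your own table's columns):

```json
[
    {
        "@odata.etag": "",
        "ItemInternalId": "0a1b2c3d-1111-2222-3333-444455556666",
        "Title": "Mr",
        "First Name": "John",
        "Last Name": "Smith",
        "Email": "john.smith@example.com",
        "Country": "Australia",
        "Company": "Contoso"
    }
]
```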

 

Parse JSON takes in the value from List rows present in a table. To get the schema, click Generate from sample, paste in the output you copied in the previous step, and click Done.

grantjenkins_5-1668647100819.png

 

In this example, the schema would look like the following:

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "@@odata.etag": {
                "type": "string"
            },
            "ItemInternalId": {
                "type": "string"
            },
            "Title": {
                "type": "string"
            },
            "First Name": {
                "type": "string"
            },
            "Last Name": {
                "type": "string"
            },
            "Email": {
                "type": "string"
            },
            "Country": {
                "type": "string"
            },
            "Company": {
                "type": "string"
            }
        },
        "required": [
            "@@odata.etag",
            "ItemInternalId",
            "Title",
            "First Name",
            "Last Name",
            "Email",
            "Country",
            "Company"
        ]
    }
}

 

The @@odata.etag and ItemInternalId fields are auto-generated (the doubled @ is just how the generated schema escapes a property name that starts with @; the actual property is @odata.etag). You can ignore them or remove them from the schema. Also, if some of your columns could contain empty cells (not all fields filled in), remove those fields from the "required" array so Parse JSON doesn't reject rows where they are missing. For this example, I'll remove the auto-generated fields and treat Country as optional (not required). My updated schema would look like:

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "Title": {
                "type": "string"
            },
            "First Name": {
                "type": "string"
            },
            "Last Name": {
                "type": "string"
            },
            "Email": {
                "type": "string"
            },
            "Country": {
                "type": "string"
            },
            "Company": {
                "type": "string"
            }
        },
        "required": [
            "Title",
            "First Name",
            "Last Name",
            "Email",
            "Company"
        ]
    }
}

 

The Apply to each iterates over each of the rows (using the Body from Parse JSON), and for each one, creates a new item in the list.

grantjenkins_6-1668647956958.png

 

And that's it - you should now see items being added to your list.

 

 

 

EXAMPLE 2 - NOT using JSON Schema

 

As an alternative option, you could bypass the Parse JSON and reference the fields directly. See full flow below:

grantjenkins_7-1668648800383.png

 

For the Apply to each, you would now just pass in the value output from List rows present in a table.

 

And for each of the values you would use the following expressions:

//Example
items('Apply_to_each')?['Excel_Column_Name']

//Actual
items('Apply_to_each')?['Title']
items('Apply_to_each')?['First Name']
items('Apply_to_each')?['Last Name']
items('Apply_to_each')?['Email']
items('Apply_to_each')?['Country']
items('Apply_to_each')?['Company']
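If a column might be empty (like the optional Country column above), you can wrap the reference in coalesce so a blank value falls back to a default. coalesce is a standard Power Automate expression function; the empty-string fallback here is just one possible choice:

```
//Sketch - default Country to an empty string when the cell is blank
coalesce(items('Apply_to_each')?['Country'], '')
```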

 

This would give you the same result as Example 1 and might be a bit easier for your scenario.


----------------------------------------------------------------------
If I've answered your question, please mark the post as Solved.
If you like my response, please consider giving it a Thumbs Up.


4 REPLIES
grantjenkins
Super User

A few questions:

  1. Is the trigger, When a file is created (properties only)?
  2. Will the Table name always be the same, or could it be anything?
  3. If not a specific Table name, will each Excel File potentially contain multiple Tables, or just one?
  4. Will the columns in the Table always be the same?

----------------------------------------------------------------------
If I've answered your question, please mark the post as Solved.
If you like my response, please consider giving it a Thumbs Up.

  1. Is the trigger, When a file is created (properties only)? Yes.
  2. Will the Table name always be the same, or could it be anything? The table name will always be the same: "Employees". The Excel file will be a standardized form, but the files are often renamed.
  3. If not a specific Table name, will each Excel file potentially contain multiple Tables, or just one? There will be a few tables on the sheet, but only one will need to be referenced.
  4. Will the columns in the Table always be the same? The columns and column names will always be the same. There will be a varying number of rows in the table depending on the number of personnel being submitted.

Thanks for the reply! This flow could save my team a lot of time.


Thank you so much! Works like a dream!
