Dlabar
MVP

Plugin Class/Assembly Management/Organization for ALM

In most projects with Dataverse plugins, there will be a need for more than one plugin.  This brings up the challenge of how to organize the classes/assemblies, and it normally comes down to striking a balance between maximizing plugin deployment segregation (a single plugin class per plugin assembly) and ease of deployment/Visual Studio project management (all plugins in a single assembly).  The former makes it easy to manage exactly which plugins are being updated, while the latter simplifies the ALM process, since only one plugin assembly needs to be registered.

 

When automating ALM plugin updates/registrations, I'm assuming it makes the most sense to have a single plugin assembly.  Is this assumption correct?  Why or why not?  Are there other conventions that are used?

1 ACCEPTED SOLUTION

KimB
Advocate I

We are using the same approach: one plugin project per solution.

 

We use namespaces to further structure the plugin classes. One plugin class has one defined functionality, which makes it easier to unit test, and to disable or remove the steps or the class later.
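
To make "one plugin class, one defined functionality" concrete, here is a minimal sketch; all names (Contoso.Plugins.Account, SetAccountNumberOnCreate, the account-number rule) are hypothetical placeholders, with the namespace used purely for grouping by table.

// Minimal single-purpose plugin class; intended to be registered on Create of account,
// PreOperation. Doing exactly one thing keeps the step easy to unit test, disable or
// remove later.
using System;
using Microsoft.Xrm.Sdk;

namespace Contoso.Plugins.Account
{
    public class SetAccountNumberOnCreate : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider
                .GetService(typeof(IPluginExecutionContext));

            if (!context.InputParameters.Contains("Target") ||
                !(context.InputParameters["Target"] is Entity target))
            {
                return;
            }

            // The single piece of business logic: stamp a generated number onto the record
            // before it is saved.
            target["accountnumber"] = $"ACC-{DateTime.UtcNow:yyyyMMddHHmmss}";
        }
    }
}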

 

For ALM, we use the Solution Packager and its mapping functionality to map the DLL out of the unpacked solution and inject the build result when packing.
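
For readers who haven't used the mapping functionality: a mapping file passed to SolutionPackager via /map redirects the packed assembly to the build output instead of the file in the extracted folder. A minimal sketch, with hypothetical names and paths:

<?xml version="1.0" encoding="utf-8"?>
<!-- mapping.xml, used as: SolutionPackager /action:Pack /zipfile:Solution.zip /folder:ExtractedSolution /map:mapping.xml -->
<Mapping>
  <!-- Take the plugin DLL from the Visual Studio build output rather than the extracted solution folder -->
  <FileToFile map="MyProject.Plugins.dll" to="..\..\Plugins\bin\**\MyProject.Plugins.dll" />
</Mapping>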

 

Consider this article as well: https://docs.microsoft.com/en-us/powerapps/developer/data-platform/best-practices/business-logic/opt...

 


9 REPLIES

ScottDurow

Having multiple plugin assemblies only makes things more complex, because you have to build them all and copy them to the package folder in your build pipeline. I generally have a single plugin assembly per solution - if I am segmenting the solutions, then I will have a plugin assembly per solution (if needed) and build/version/release them separately.


DianaBirkelbach
Super User

Hi @ScottDurow , @KimB ,

We actually have one plugin project per entity/process, because usually they are developed by different developers. We have unit tests, but that's not a replacement for real tests. Do you have dedicated environments per developer? Otherwise it could be hard to test at the same time, given a single assembly.
I know it's recommended to have dedicated environments per developer, but it's quite an overhead, and it would be interesting to hear about your experience.

Kind regards,

Diana

@DianaBirkelbach I think you are referring to the challenge that when two or more developers are updating the same plugin assembly and deploying it to the same environment, they can overwrite one another's changes. This of course is not just an issue for Plugins (it happens for Flows, Canvas Apps etc.), but as the size of your plugin assembly grows, a collision becomes more likely. Even if you are using feature flags (which I recommend), this still doesn't solve the issue, since it's a deployment collision between different local versions.

It certainly is an important consideration - I find that it's more flexible to decide how my solution/plugins are versioned/deployed separately from how they are being developed. I don't want to end up with multiple Plugin Assemblies simply because two developers were working on a Plugin at the same time. And even if you segment your Plugin Assemblies by Entity, you can end up with the same problem (though perhaps less frequently).

 

The strategies I take to address this challenge are:

 

1. Multiple environments per feature being developed - then leave it up to the feature pod to decide on how they manage concurrent development so that they don't interfere with each other. Since the feature pod usually only consists of a smaller number of developers and they are isolated from other parallel work in their own branch/environment I find this works well. The changes are only then integrated into the build when the branch is merged.

 

2. Test Locally before Deployment and Integration - Create integration tests that run your plugins against an environment so you can test your work locally (using a mock pipeline connected to a real OrganizationService - see the sketch after this list) before it is integrated into the development environment. There would be someone (or ideally a nightly build/deploy process) in the pod responsible for building and deploying to the development environment. The developers who worked on the plugins should have a high degree of confidence that their plugin logic works, since they've tested it locally against the Dataverse environment.

 

3. Develop in temporary Plugin Assemblies - If you know that you are likely to collide with some other development (and you really should know what's happening in the environment you are developing against at all times - if you don't, then there is a whole different problem!), you can develop and deploy your plugin code separately from the solution Plugin Assembly to test it during development. Once you are happy, you delete the development version, integrate the code into the main solution Plugin Assembly, and commit it so it can be built and deployed when the pod is ready.
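
As an illustration of the "mock pipeline connected to a real OrganizationService" idea in point 2 (a sketch only, not necessarily Scott's exact setup), the test below fakes the pipeline services with Moq but hands the plugin a real connection via the Dataverse ServiceClient. The plugin under test (the hypothetical SetAccountNumberOnCreate sketched earlier in the thread) and the connection string are placeholders.

// Sketch: fake the plugin pipeline, but give the plugin a real connection to a
// developer-owned Dataverse environment. Assumes Moq, xUnit and
// Microsoft.PowerPlatform.Dataverse.Client; names and connection string are hypothetical.
using System;
using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk;
using Moq;
using Xunit;
using Contoso.Plugins.Account; // the hypothetical plugin sketched earlier in the thread

public class SetAccountNumberOnCreateTests
{
    [Fact]
    public void Execute_StampsAccountNumber_OnCreate()
    {
        // Real IOrganizationService against a dev environment (placeholder connection string).
        IOrganizationService service = new ServiceClient(
            "AuthType=ClientSecret;Url=https://yourorg.crm.dynamics.com;ClientId=...;ClientSecret=...");

        // Fake execution context, shaped the way the platform would build it for a Create message.
        var target = new Entity("account") { ["name"] = "Integration test account" };
        var inputs = new ParameterCollection();
        inputs["Target"] = target;

        var context = new Mock<IPluginExecutionContext>();
        context.SetupGet(c => c.MessageName).Returns("Create");
        context.SetupGet(c => c.PrimaryEntityName).Returns("account");
        context.SetupGet(c => c.Stage).Returns(20); // PreOperation
        context.SetupGet(c => c.InputParameters).Returns(inputs);

        // Service factory that hands back the real service instead of a platform-created one.
        var factory = new Mock<IOrganizationServiceFactory>();
        factory.Setup(f => f.CreateOrganizationService(It.IsAny<Guid?>())).Returns(service);

        var provider = new Mock<IServiceProvider>();
        provider.Setup(p => p.GetService(typeof(IPluginExecutionContext))).Returns(context.Object);
        provider.Setup(p => p.GetService(typeof(IOrganizationServiceFactory))).Returns(factory.Object);
        provider.Setup(p => p.GetService(typeof(ITracingService))).Returns(Mock.Of<ITracingService>());

        // Run the plugin exactly as the platform would call it.
        new SetAccountNumberOnCreate().Execute(provider.Object);

        // The PreOperation plugin is expected to have stamped the number onto the Target.
        Assert.True(target.Contains("accountnumber"));
    }
}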

 

I've used all of these techniques (often in combination) and they work well - and mean you don't have to change your ALM solution deployment strategy to accommodate the way that developers are working and the parallel work that is being performed.

 

The most important thing here is communication between all developers, so that everyone who is working on the same development environment knows what is being worked on and by whom - daily stand-ups make this easy.

 

Hope this helps!

 

DianaBirkelbach
Super User

Thank you so much @ScottDurow  for taking the time to share your experience and thank you @Dlabar for bringing this up!

I'm somewhere in between all this, still looking for the perfect world.🤔

 

1. Multiple dev environments are indeed the perfect world, but this means the overhead of continuously syncing the customizations and test data from the main "dev" environment, where the consultants, solution architects and sometimes the customers are doing the customizing.

 

2. So you trust a mocked integration test, without deploying and testing the registration, the definition of filtered attributes and images, or the interaction with other plugins (even plugins on another entity could lead to a ping-pong effect)? I wish I could have that confidence. Maybe it's because in the big projects, as a dev, I don't know everything that's happening (or because I'm too much of a control freak 😁 )

 

3. This idea of a temporary assembly is interesting. Do you keep the temporary project for later, in case a change request comes in? Do you base the plugin on some kind of shared projects?

 

Definitely a lot of ideas to give a second thought to. Thank you again, Scott!

 

Kind regards,

Diana

 

BenediktB
Advocate I

I usually go with the same approach as @DianaBirkelbach does. I create an Assembly per Entity/Table.

 

My reason for that is not mainly that different developers could overwrite each other's changes - which, as mentioned, could be a problem, but one that can be fixed as @ScottDurow explained.

My main reason is to keep my solution manageable. If you have a big implementation with hundreds of plugins and all of them are in the same assembly, it gets hard to deploy plugins, because there are always changes in some plugin that haven't been tested by the customer yet. Of course one could (and maybe should) use feature flags. But even then you have changed code that should be tested by the users.
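
For readers who haven't used them, a feature flag in this context is often just a guard around the new code path. A rough sketch, assuming the flag is stored as a Dataverse environment variable; the schema name is hypothetical and the lookup is simplified (it only checks the current value, not the definition's default):

// Rough sketch of a feature-flag guard backed by a Dataverse environment variable.
// "contoso_EnableNewAccountNumbering" is a hypothetical schema name.
using System;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class FeatureFlags
{
    public static bool IsEnabled(IOrganizationService service, string schemaName)
    {
        // Look up the current environment variable value by the definition's schema name.
        var query = new QueryExpression("environmentvariablevalue")
        {
            ColumnSet = new ColumnSet("value"),
            TopCount = 1
        };
        var definition = query.AddLink(
            "environmentvariabledefinition",
            "environmentvariabledefinitionid",
            "environmentvariabledefinitionid");
        definition.LinkCriteria.AddCondition("schemaname", ConditionOperator.Equal, schemaName);

        var current = service.RetrieveMultiple(query).Entities.FirstOrDefault();
        return current != null
            && bool.TryParse(current.GetAttributeValue<string>("value"), out var enabled)
            && enabled;
    }
}

// Usage inside a plugin, guarding the new (not yet user-tested) logic:
// if (!FeatureFlags.IsEnabled(service, "contoso_EnableNewAccountNumbering")) return;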

I thought you had a single plugin assembly per solution?

No, I usually split them by Table.

nicknow
Frequent Visitor

One idea, to build on @ScottDurow's suggestion of a temporary assembly, is to use Shared Projects in your solution and then have multiple projects reference the shared project to generate different DLL outputs (for production vs. local dev solutions). This also helps address @BenediktB's concern about feature flags (something I agree with Scott about using) and untested code.

 

You can divide it up by table or functional area or developer or whatever makes sense for your project.

 

But the result is the same. You have a shared project in VS that contains the plugin code itself. Then you have two normal projects that reference that plugin code. One, call it tester, references only a single shared project (it could be more than one depending on the dev strategy, but fewer than all of them). The other, call it production, references all the shared projects, resulting in a single DLL output (because shared projects result in the code being compiled into the referencing project instead of generating individual DLLs).

 

This is how it looks in VS (there are three different solutions):

 

MyProject.Account [Solution]
-> MyProject.Account [Shared Project]
-> MyProject.Account.Dev [tester] (Reference: MyProject.Account)

 

MyProject.Contact [Solution]
-> MyProject.Contact [Shared Project]
-> MyProject.Contact.Dev [tester] (Reference: MyProject.Contact)

 

MyProject [Solution]
-> MyProject.Plugins [Production] (Reference: MyProject.Account, MyProject.Contact)


When I build MyProject.Account, as a developer, I get just those plugins in a DLL, MyProject.Account.DLL, and can test them all I want locally. And if I have a single environment for dev testing I can test them in that environment without impacting other developers. (same goes for MyProject.Contact).

 

Now, when you get to a production build and test (automated or not) I build MyProject and get just one DLL, MyProject.Plugins.DLL.
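
For anyone wiring this up: a shared project is pulled into a consuming project through an Import of its .projitems file rather than a normal project reference. A hypothetical slice of MyProject.Plugins.csproj (the relative paths assume the repositories sit side by side):

<!-- MyProject.Plugins.csproj (hypothetical): compiles the code from every shared project into one DLL -->
<Import Project="..\MyProject.Account\MyProject.Account.projitems" Label="Shared" />
<Import Project="..\MyProject.Contact\MyProject.Contact.projitems" Label="Shared" />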

 

Shared projects also work great for common functionality across multiple plugin DLLs (whether for the same project or for different projects you have going on.)

 

Plus, because each project is its own Git repository you can have branches, only building from the production branches for the production DLL output, but using a dev branch for tester DLL output.

 

Shared projects are an underutilized capability for plugin development, assuming you don't want to use ILMerge/ILRepack (which are not officially supported by Microsoft for plugin development.)
