Dlabar
MVP

Unmanaged to Managed Solutioning Strategies

I'm starting my first project where I'll be using managed solutions.  In my unmanaged solutioning, each release I create a new solution, and put only the changes that have been made into that solution, and then deploy through the environments.  Can I just do the same thing in a managed sense?  Or should I be having a single managed solution that contains everything that I deploy each release?  This seems wrong to me.  I know I could use patches, but then at what point should I do an Upgrade vs another patch? I've also seen different projects where solutions are divided up, Tables/code/other/etc.  Is that helpful as well?  And if so, how do I not create issues with inter managed dependencies?

1 ACCEPTED SOLUTION

ScottDurow

Why does it seem wrong to deploy the same managed solution with each release? This is the simplest to manage and I usually try to use this approach. Many of the problems I see with managed solution deployments are not to do with the solution being managed - but because managed solutions are being treated in the same way you might treat an unmanaged one - and they are very different.


In general - the smaller the number of solutions, the easier they are to manage. Only segment where it has a real benefit and you are prepared to manage that solution in its own deployment/versioning ALM.
The safest approach is to have each managed solution managed in its own source dev environment/repo - with the other managed dependencies installed as managed pre-requisites. You might have a core solution and then a department solution - so you'd have an environment to manage the core solution, and then another for the department solution that has the managed core solution installed. But you'd only want to do this if you needed to deploy the core to multiple orgs with different layered solutions, or you need a separate deployment/versioning ALM cycle for the different parts.

Segmenting is quite a complex area and there are some 'short cuts' - such as having a single environment/repo to manage segmented solutions that hold flows/apps, and another solution for the core tables etc. - but the safest is to have a separate environment for each segmented managed solution.

Again - I can't stress this enough - only segment where there is a benefit in terms of re-use or deployment/versioning autonomy. A good example of this is the 1st party Dynamics apps - where they have common solutions, then additional solutions for the various apps, and even solutions for some of the smaller features within the apps (e.g. apps/pcf/webresources/flows/plugins). This allows deployment/versioning of a very complex set of functionality in separate parts - and then potentially patching in parts - but there is a lot of complex management of the development of each of those solutions. If you are a small team then this approach is going to be hard to maintain and support.

 

The main reason people usually give for not having a single solution is the time it takes to deploy. There has been a lot of work in this area and the import time is much better than it used to be for updates (not upgrades) - and it is constantly getting better.

Imho - unless you are a large organisation with many developers and a large number of environments/solution components with different release schedules, the decreased import time does not justify the complexity introduced by solution segmentation. The only exception to this is segmenting for components that don't introduce solution dependencies, since those can be managed in the same environment.
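
To make the "one managed solution, redeployed every release" flow concrete, here's a rough sketch of what the import step can look like against the Dataverse Web API, using Python and the ImportSolution action. The org URL, token handling and file name are placeholders, so treat this as an illustration rather than a drop-in script:

```python
import base64
import uuid
import requests

# Placeholders: supply your own environment URL and OAuth bearer token.
ORG_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {"Authorization": "Bearer <token>", "OData-MaxVersion": "4.0"}

def import_managed_solution(zip_path: str) -> None:
    """Import the same managed solution package as an update (not an upgrade)."""
    with open(zip_path, "rb") as f:
        customization_file = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "CustomizationFile": customization_file,
        "OverwriteUnmanagedCustomizations": False,
        "PublishWorkflows": True,
        "ImportJobId": str(uuid.uuid4()),
        # False = plain update; True would stage the import as a holding solution.
        "HoldingSolution": False,
    }
    resp = requests.post(
        f"{ORG_URL}/api/data/v9.2/ImportSolution",
        json=payload,
        headers=HEADERS,
    )
    resp.raise_for_status()

import_managed_solution("MyCoreSolution_managed.zip")
```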

 

Patches are an option to reduce deployment time - but again they introduce more complexity and make ALM very difficult because you can't easily diff between solutions - for this reason I don't use them unless there is an out-of-band deployment/hotfix. Once you've patched - you'll then need to do an upgrade at some point.


From Create and update custom Power Apps solutions for ALM - Power Platform | Microsoft Docs:

"Using clone a patch and clone solution to update a solution isn't recommended because it limits team development and increases complexity when storing your solution in a source control system. "

 

Hope this helps


11 REPLIES


cchannon
Super User

Excellent topic that not nearly enough people take the time to dive into, IMO!

 

There are a lot of considerations to make when you jump over to managed solutions. It definitely introduces new risks to your process, but it also delivers some really excellent benefits at the same time.

 

My best advice is to start by thinking about the classic stratified visualization of solution layers (see my terrible MS paint skills below): as any component in Dataverse gets returned, it is evaluated against each of your solution layers in order, going from the oldest to the newest managed layers, then finally the unmanaged layer.

[Image: diagram of stacked solution layers - managed layers from oldest at the bottom to newest above, with the unmanaged layer on top]

Layering managed layers therefore gives you the ability to have an almost polymorphic control of your solution components: mgd solution 1 defines an entity "animal", then managed layer 2 says that animal is specifically of type "mammal", and managed layer 3 says that animal-->mammal is specifically of type "ape". This is a really powerful tool when you are mixing together customizations from multiple sources or teams, but if you only have one source definition (this project's dev environment) then it might just be overkill.
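
To be clear, this is not how Dataverse implements it internally - just a toy model (in Python, with made-up layer names) of the resolution order I'm describing, where the newest managed layer wins for anything it defines and the unmanaged layer sits on top of everything:

```python
# Toy illustration only - not a Dataverse API. Each layer supplies the
# properties it customizes; resolution walks oldest -> newest managed,
# then the unmanaged layer, with later layers overriding earlier ones.
managed_layers = [
    {"solution": "core",       "animal_type": "animal"},   # oldest managed
    {"solution": "department", "animal_type": "mammal"},
    {"solution": "project",    "animal_type": "ape"},      # newest managed
]
unmanaged_layer = {"solution": "active", "display_name": "Primate"}

def resolve(component_property: str):
    value = None
    for layer in managed_layers + [unmanaged_layer]:
        if component_property in layer:
            value = layer[component_property]   # later layers win
    return value

print(resolve("animal_type"))   # -> "ape"
print(resolve("display_name"))  # -> "Primate" (the unmanaged layer wins last)
```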

 

"but what about ease of deployment and only moving the newest customizations?" With an unmanaged solution you are always dumping to the same default solution so you can easily just add in the things you've changed and toss it from org to org, but with a managed solution this is less straightforward. You don't want to remove content from your solution just to minimize it (what a management headache!) but you also don't want a ton of managed solutions floating around your higher environments. Skipping right past the fact that it is ugly, when you have multiple managed solutions that contain references to the same components, it becomes very difficult to ever remove those components; say you have field X in mgd solution A and mgd solution B, now you want to remove it from the system. You remove it from A and go to import as an upgrade. but--oh no!--the import fails because you can't delete a component required by another managed solution. You remove it from B and try again, but still no luck because it is required by A! This kind of headache happens more often than you might think because a big advantage of managed solutions is that you can do these subtractive "upgrade" pushes which unmanaged solutions don't let you do. So, we're left with promoting this same managed solution again and again or, as you suggested, Patches.

 

Patches are an excellent tool, although I think they are no longer the 'preferred' approach of MSFT and may eventually be replaced with something more functional (that is still a long way off, don't worry). A Patch gives you the best of both worlds: it lets you stack managed layers on top of one another in a way that makes them very easy to remove if you want to back out a release, but also sets them up to be cleaned up eventually so you don't wind up piling up conflicts over time and making future subtractive releases unnecessarily difficult. When you generate a patch, what you're doing in your source system is creating a new unmanaged solution that is linked to your main unmanaged one; when you export it and import it into the new org, it does not merge them but stacks them, so you get your changes but can also easily back them out. The "Upgrade" you refer to is when you decide that all your patches are rock-solid, the solution is clean and there is no chance you will need to roll any of them back, so you are ready to again call it just one managed solution. You clone as Upgrade and it will consolidate your base solution and all patches into one managed layer, and upon importing into the new org it will do the same: flattening all patches and the base into one nice, clean managed layer again. (If it isn't already obvious, patches are the path I recommend.)
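
If you want to script the patch/upgrade dance rather than clicking through the maker portal, the underlying Dataverse Web API actions are CloneAsPatch and CloneAsSolution. A rough sketch in Python - the URL, token and solution names are placeholders, and the exact parameter shapes are worth double-checking against the docs:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder dev environment
HEADERS = {"Authorization": "Bearer <token>", "OData-MaxVersion": "4.0"}

def clone_as_patch(parent: str, display_name: str, version: str) -> None:
    """Create a patch solution linked to the parent, in the source/dev environment."""
    requests.post(
        f"{ORG_URL}/api/data/v9.2/CloneAsPatch",
        json={
            "ParentSolutionUniqueName": parent,
            "DisplayName": display_name,
            "VersionNumber": version,   # e.g. 1.0.1.0 for a patch of 1.0
        },
        headers=HEADERS,
    ).raise_for_status()

def clone_as_solution(parent: str, display_name: str, version: str) -> None:
    """Roll the base solution and all of its patches back into one solution (clone as Upgrade)."""
    requests.post(
        f"{ORG_URL}/api/data/v9.2/CloneAsSolution",
        json={
            "ParentSolutionUniqueName": parent,
            "DisplayName": display_name,
            "VersionNumber": version,   # e.g. 2.0.0.0 for the consolidated upgrade
        },
        headers=HEADERS,
    ).raise_for_status()

# Example: patch during the release cycle, consolidate once the patches are proven.
clone_as_patch("MyCoreSolution", "MyCoreSolution Hotfix", "1.0.1.0")
clone_as_solution("MyCoreSolution", "MyCoreSolution", "2.0.0.0")
```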

 

So when do you consolidate your patches? That's really a question to be left to your project specifics and how you want to define a patch, but for the ISV products I manage, we do our releases in batches. We ship each of our 'preview' features as an individual patch that I can give out to customers to review and offer feedback on, and when we've incorporated the feedback and are ready to finalize the release, we consolidate and issue a new major release.

 

Issues you'll have when using a patch:

  • The source is still all one unmanaged layer, so as tempting as it is to pretend that you can cleanly issue patches as individual features that can be installed in any order, this is often not really the case. I especially find a lot of headaches with ribbon customizations where it is an all-or-nothing add to the unmanaged solution. So, for example, if Patch A has a new ribbon button on some entity and Patch B is updating a different ribbon button on the same entity, there is no way to prevent these customizations (and their dependencies) from bleeding over into one another, so when it comes to things like this you will need to plan carefully and often subsegment your patches based on the schema they impact.
  • Patches cannot be subtractive
  • Patches lock down the core solution so that from the moment you make your first patch until you clone the solution again as an upgrade, you are dead stuck; you cannot even touch your core solution until you go all the way.

On balance, there are of course downsides to every approach, but I find that patches let me and my teams experiment freely with new features, knowing that we can always safely rip them out of an environment if there is an issue. This is an enormous value-add that I just adore, and good enough reason for me to recommend it to anyone.

@ScottDurow 's reply came in while I was writing mine so I am only seeing it now. I agree with everything he said, and I think we just have different perspectives on the cost/benefit of patches. Scott is totally right about the segmenting and not shying away from big solutions AND that patches add a lot of complexity, I would just say that the cost/benefit calculus changes a lot as the complexity of your feature releases changes. The more isolated features you are pushing out and the more you think you might need to pull them back, the more reasonable the costs of patches becomes.

ChrisPiasecki
Super User

@ScottDurow already touched on all the major points, so I won't repeat everything, as I agree with those statements.

 

I prefer to keep it as simple as possible and keep the number of solutions to a minimum. I think segmenting PA Flows is OK, as they require a bit more work to deploy automatically with connection references.
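
For reference, the way I handle connection references in the pipeline is a deployment settings file passed to the import step (e.g. `pac solution import --settings-file`). The snippet below is only a rough sketch of the shape of that file, written out from Python; generate the authoritative version with `pac solution create-settings` and treat the field names and IDs here as placeholders:

```python
import json

# Rough shape of a deployment settings file for `pac solution import --settings-file`.
# The logical names, connection ID and connector ID below are placeholders for
# whatever exists in the target environment.
settings = {
    "EnvironmentVariables": [
        {"SchemaName": "new_ApiBaseUrl", "Value": "https://api.example.com"},
    ],
    "ConnectionReferences": [
        {
            "LogicalName": "new_sharedcommondataserviceforapps_12345",
            "ConnectionId": "<connection id created in the target environment>",
            "ConnectorId": "/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps",
        },
    ],
}

with open("deploymentsettings.test.json", "w") as f:
    json.dump(settings, f, indent=2)
```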

 

I've never liked patches or had much success with them, so I no longer attempt to use them. As Scott mentioned, they make it difficult to manage in source control and require their own ALM steps.

 


BenediktB
Advocate I

I do agree with @ScottDurow and partially with @cchannon.

I always use managed solutions based on the pros that were discussed and that you are probably fully aware of. The two major things for me are the possibility to roll back Upgrades (up until the Holding solution is applied) and the fact that an Upgrade will clean up components that were deleted from the solution. So if you delete a field from Dev, it will be deleted from the following environments as well.

 

I also rarely use segmentation because of the complexity which comes with it, which Scott described very well. One example where it is useful, though, is when there are several teams working on your end solution. There, one team might build the core functionality and other teams might build on top of it (a bit like Scott described). Then every team should have at least one environment and a separate segmented solution.

 

As I mentioned on Twitter: the approach you use at the moment with unmanaged solutions isn't usable, and not at all recommended, with managed solutions. You would create a dependency hell (as @cchannon described) where you will be unable to delete anything in the future.

When I started here in Sweden over 3 years ago, my first assignment was on a project where this approach was used. It was also a CRM 2013 on-premises installation, which made it worse, since a lot of the investment in this area has come in recent releases. We had maybe hundreds of managed solutions in prod, all referencing components from each other. It is basically unmanageable and not possible to clean up.

 

I have to disagree with @cchannon in regards to patches. I rarely use them (the only occasion is when I have a hotfix to deploy) because I find them nearly unusable when it comes to an automated deployment process. I have the following problems with them:

  • When it comes to an automated process, the solution name should always be the same; otherwise one has to change the pipeline with every deployment, which defeats the whole point of automation. When you create patches, the name changes with every patch. (See the sketch after this list for one way to keep the name and version under pipeline control.)
  • The base/core solution a patch is created from has to be exactly the same in the source and the target environment; otherwise the patch can't be installed. One example: you are in production with version 1.0, you have developed some features and have version 1.1 in dev as well as deployed to test, and then a bug is detected that you have to fix. When you create a patch on your current version, you can't move it to production, since the core version of the patch is higher than the core version in production. Yes, today you can change the version to whatever you want (even to a lower version) in the maker portal, but it still feels awkward. I could see benefits when it comes to ISVs.
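
One way I work around the naming/versioning issue (for normal solutions, not patches) is to keep the unique name fixed and let the pipeline stamp the version before export. A hedged sketch in Python against the Web API - the unique name, URL and token are placeholders, and I'm assuming the solution row's version column is writable the same way the maker portal edits it:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # dev environment, placeholder
HEADERS = {"Authorization": "Bearer <token>", "OData-MaxVersion": "4.0"}

def stamp_solution_version(unique_name: str, new_version: str) -> None:
    """Keep the unique name stable for the pipeline and only bump the version."""
    resp = requests.get(
        f"{ORG_URL}/api/data/v9.2/solutions",
        params={
            "$select": "solutionid,version",
            "$filter": f"uniquename eq '{unique_name}'",
        },
        headers=HEADERS,
    )
    resp.raise_for_status()
    solution = resp.json()["value"][0]

    requests.patch(
        f"{ORG_URL}/api/data/v9.2/solutions({solution['solutionid']})",
        json={"version": new_version},
        headers=HEADERS,
    ).raise_for_status()

# e.g. derive the version from the build number so every export is traceable
stamp_solution_version("MyCoreSolution", "1.0.0.42")
```
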
BenediktB
Advocate I

Since Holding solutions and Upgrades are only a thing with managed solutions, you might not be aware of what I was referring to (and what Scott was as well). I tried to describe them a bit in one of my blog posts:

https://benediktbergmann.eu/2020/09/20/apply-solution-upgrade-in-pipeline/#background
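
For anyone following along who hasn't used the Upgrade flow I linked to, this is roughly what the two steps look like via the Web API: import the new managed version as a holding solution, then DeleteAndPromote it to apply the upgrade (which is also the step that deletes the components you removed in dev). Python sketch with placeholder URL/token; the parameter names are from memory, so check them against the docs in the post above:

```python
import base64
import uuid
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # target environment, placeholder
HEADERS = {"Authorization": "Bearer <token>", "OData-MaxVersion": "4.0"}

def stage_upgrade(zip_path: str) -> None:
    """Step 1: import the new managed version as a holding (staged) solution."""
    with open(zip_path, "rb") as f:
        customization_file = base64.b64encode(f.read()).decode("ascii")
    requests.post(
        f"{ORG_URL}/api/data/v9.2/ImportSolution",
        json={
            "CustomizationFile": customization_file,
            "OverwriteUnmanagedCustomizations": False,
            "PublishWorkflows": True,
            "ImportJobId": str(uuid.uuid4()),
            "HoldingSolution": True,   # stage alongside the existing version instead of merging
        },
        headers=HEADERS,
    ).raise_for_status()

def apply_upgrade(unique_name: str) -> None:
    """Step 2: promote the holding solution; components removed in dev get deleted here."""
    requests.post(
        f"{ORG_URL}/api/data/v9.2/DeleteAndPromote",
        json={"UniqueName": unique_name},
        headers=HEADERS,
    ).raise_for_status()

stage_upgrade("MyCoreSolution_managed_1_1.zip")
apply_upgrade("MyCoreSolution")   # to roll back, delete the holding solution before this step
```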

@BenediktB - Agree that Patch naming (and many other aspects of patches) could really use some love from MSFT. Those are definitely among the many reasons they outwardly do not promote patches as a best practice.

 

The key to making patches viable, IMO, is thinking of them more as an ethereal solution branch; either as a "hotfix that we will roll into the whole solution once we can test it better" or as a "feature we will roll into the solution once we know it isn't worthless". Granted, in both cases rollback isn't as simple as it could be (I have R-rated dreams about the ability to build a managed layer directly in an environ instead of importing it), but it is definitely much better than putting a temporary addition into one monolithic managed solution only to release it and decide "that was a bad idea".

BenediktB
Advocate I

I am still not convinced. How do you handle a normal release when you have a feature patch which is in development/testing for several sprints/weeks?

As you mention, patches block the core solution, so they have to be short-lived - which isn't possible when one uses them for feature development. At least I haven't seen it work.

 

We tend to create POC environments if we would like to test out new features, or feature environments to develop them. When they are finished, they get deployed to the dev environment and merged into our normal solution.

 

I am sorry, but I don't see patches there. Could be that I don't see the complete picture though.

filcole
Frequent Visitor

One caveat with the all-in-one solution approach is custom connectors. Currently it's possible to get SQL deadlocks, and thus a failed solution install, if a custom connector is included in a single solution alongside its dependent components. See https://docs.microsoft.com/en-us/connectors/custom-connectors/customconnectorssolutions#known-limita...
