Tuesday, August 9, 2022

Distributed System Design with .Net ecosystem and Azure - Part 2

In my previous post in this series, I discussed the high-level requirements of the client, the technology stack we chose to deliver the solution, the primary data structure, and a bit about the data flow design. If you haven't read it, I highly recommend doing so to get the context for this article.

Redis DB as a primary Database

To understand how we reached a consensus on choosing Redis as the primary database, a bit of history helps.

Our client had an existing desktop-based system, a .NET WinForms application using Redis as its database. Honestly, this was the first time I saw Redis used as a primary database, and I had mixed feelings about it. The design looked fascinating and scary at the same time.

The reason to worry was data persistence!

Redis is fundamentally an in-memory data structure store. In layman's terms, if you write a piece of data to Redis, it is stored in RAM and not on disk. Hence it is mostly used as a cache rather than as a database. Then I learned that Redis has configuration options with which you can persist data to disk. By default, it persists data to disk with a snapshotting (RDB) strategy. However, you can choose to persist using the AOF (append-only file) strategy, or both combined.

When I looked at the configuration of the existing system, it was set up to use the snapshotting strategy with the configuration value save 20 1 (i.e., take a snapshot every 20 seconds if there is at least 1 key change).

Still, it was not convincing enough to use this as a primary data store, because with the above configuration, if your system happened to crash, in the worst case you would lose 20 seconds of data. And in sports, every second matters! Especially when you are the official governing body of a specific sport (I cannot reveal the name here due to NDA constraints).

Unfortunately, it was not possible to speak to any of the technical people who developed that system. The system had, however, been running successfully in production for a couple of years.

Digging further into the materials shared by the client, I figured out that there were multiple Redis instances running as a backup system and receiving the data via pub/sub. With this, you could achieve a certain degree of data safety in terms of durability. If not perfect, it was a good-enough kind of safety for the occasional unfortunate failure scenario.

But as you guessed, the performance was blazingly fast!

Now, this was the benchmark for our cloud-based solution, which had to offer all the features of the desktop system + some feature enhancements + integration with external systems with sub-second latency.

So I definitely didn't want to move away from Redis, but I also wanted to introduce further measures to ensure durability. After doing some research we arrived at the following configuration -

  1. Kept the snapshotting configuration as is, with the value save 20 1 (i.e., take a snapshot every 20 seconds if there is at least 1 key change)
  2. Enabled the AOF strategy with the value appendfsync everysec
  3. Enabled master-slave replication with asynchronous configuration
Note: The master-slave replication used here is not for the purpose of syncing data from on-premise to cloud or vice versa. It was only to have a local backup of the data in case the primary system hosting Redis crashes.

With this setup, in the worst case we would lose at most 1 second of data, which the client approved.
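For illustration, here is a minimal C# sketch of how these persistence settings could be applied (or verified) at startup using the StackExchange.Redis client. The endpoint and the idea of setting them from application code are assumptions for this sketch - in practice the same values can simply live in redis.conf:

using System;
using StackExchange.Redis;

public static class RedisPersistenceSetup
{
    public static void Apply()
    {
        // allowAdmin=true is required for CONFIG commands; the endpoint is a placeholder.
        var mux = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
        var server = mux.GetServer("localhost", 6379);

        // 1. RDB snapshotting: snapshot if at least 1 key changed within 20 seconds.
        server.ConfigSet("save", "20 1");

        // 2. AOF: append every write and fsync once per second.
        server.ConfigSet("appendonly", "yes");
        server.ConfigSet("appendfsync", "everysec");

        // 3. Master-slave (asynchronous) replication is configured on the replica side,
        //    e.g. with "replicaof <master-host> <master-port>" in its redis.conf.
        Console.WriteLine("Redis persistence settings applied.");
    }
}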

This was one of the key reference articles that helped me gain better insight into these configurations.

Why not Redis in-built replication to sync data from on-premise to cloud and vice versa?

In order to understand this, it's important to know the following facts about the application -

  1. In a Redis server, each database is just a logical separation.
  2. In our cloud Redis server, each logical Redis database holds the data of one specific tournament (see the sketch below).
  3. The on-premise Redis server holds the data of only one tournament - the one running at that location.
  4. The on-premise and cloud systems run in a toggling active-passive mode for a tournament. Meaning, at any given moment of time, for a given tournament, writes can happen either to the cloud or to on-premise, but not to both.
  5. The entire setup runs in a Linux environment on a Docker engine, both in the cloud and on-premise.
Note: The above constraints were imposed by us after considering various application features and requirements - something we agreed upon with the client.
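As a rough illustration of facts 1 and 2, here is a minimal sketch of how a logical database per tournament can be selected with StackExchange.Redis. The tournament-to-index mapping is a simplified assumption made for this sketch, not the actual project code:

using StackExchange.Redis;

public class TournamentStore
{
    private readonly ConnectionMultiplexer _mux;

    public TournamentStore(ConnectionMultiplexer mux) => _mux = mux;

    // Each tournament maps to one logical Redis database (index 0..15 by default).
    public IDatabase GetTournamentDb(int tournamentDbIndex) => _mux.GetDatabase(tournamentDbIndex);
}

// Cloud: many tournaments, one logical db each,   e.g. store.GetTournamentDb(3);
// On-premise: exactly one tournament, typically   store.GetTournamentDb(0);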

Diagrammatically it looks something like this - 


So, the following are the challenges with using the in-built replication -
  1. The on-premise side needs only one tournament's data, but the cloud deals with multiple tournaments' data, each in its own logical db.
  2. Since there are multiple on-premise instances at various physical locations, each running a single tournament, there is no clear single node which we can consider the master node.
  3. Since everything runs inside a Docker engine, there is additional configuration overhead in terms of network mapping.
Hence we decided to implement the data sync logic explicitly. In short, this is how it was implemented -
  1. Cloud to on-premise: This is a straightforward download of data from the cloud to on-premise for the specific tournament. The on-premise db gets overridden by the backup file.
  2. On-premise to cloud: Each on-premise Redis transaction is delivered to Azure Redis in the cloud via Kafka and an Azure WebJob. Basically, on-premise pushes the transaction messages to Kafka, and an Azure WebJob in the cloud consumes these messages from Kafka and commits them to the Redis db in the cloud (a rough sketch of the consumer side follows below).
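To make the on-premise-to-cloud direction a little more concrete, here is a hedged sketch of the consumer side using the Confluent.Kafka and StackExchange.Redis client libraries. The topic name, endpoints, message shape and ApplyToRedis logic are assumptions made for illustration; the real transaction format is the one described in Part 1:

using System.Threading;
using Confluent.Kafka;
using StackExchange.Redis;

public class TransactionSyncConsumer
{
    public void Run(CancellationToken token)
    {
        var consumerConfig = new ConsumerConfig
        {
            BootstrapServers = "kafka:9092",        // placeholder endpoint
            GroupId = "cloud-redis-sync",
            AutoOffsetReset = AutoOffsetReset.Earliest,
            EnableAutoCommit = false                // commit the offset only after the Redis write succeeds
        };

        var redis = ConnectionMultiplexer.Connect("cloud-redis:6379"); // placeholder endpoint

        using var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build();
        consumer.Subscribe("onprem-transactions");  // hypothetical topic name

        while (!token.IsCancellationRequested)
        {
            var result = consumer.Consume(token);

            // Replay the transaction's set/delete operations against the
            // tournament's logical db (left as a placeholder here).
            ApplyToRedis(redis, result.Message.Value);

            consumer.Commit(result);                // at-least-once delivery into Redis
        }
    }

    private void ApplyToRedis(ConnectionMultiplexer redis, string transactionJson)
    {
        // Placeholder: parse the header to find the tournament's db index,
        // then execute the hash set/delete operations inside a Redis transaction.
    }
}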

Why Kafka?

Our application needed to integrate with a couple of external systems, including both pulling data into and pushing data out of our systems. Our consumers reside in various geographical areas, so we needed a platform with inherent support for geographical distribution that can deliver data with sub-second latency. We considered a couple of options for this. They were -
  1. Azure Cosmos DB - This is a cloud-only solution, and that's the main reason we dropped it for our application. Also, it's more of a distributed storage system than a message streaming platform, so if we went with it, we would have needed an additional PaaS service or a custom implementation to deliver the data to external systems.
  2. Azure Event Hubs - This was one of the toughest decisions to make because Kafka and Event Hubs had almost equal capabilities. Compared to Kafka, Event Hubs was fairly new to the market. Kafka is a mature framework with wide community support, and our client was more inclined towards Kafka for this reason. For me, the only worry was how well it would cope with the rest of the technology stack. Fortunately, it had good C# client libraries to work with.
  3. Redis Enterprise - This was again a promising choice. It too had inherent scaling capabilities along with message delivery capabilities. Since Redis was our primary database, it made sense to go for this, but unfortunately their licensing strategy was not aligned with how our application was designed to operate. It was becoming insanely expensive, which made us drop it.
During the phase of making these decisions, we were in continuous touch with the pre-sales teams of Microsoft, Redis Enterprise and Confluent. Everyone was proposing different architectural solutions with the different tech they had to offer. For us, it was Confluent Kafka that met all our criteria. We did a couple of POCs and the results were excellent, so we proceeded with it and dropped the other options.


The overall message delivery stayed within a second across several concurrent matches - a good and satisfactory outcome.

Where does SQL Server fit in?

If you went through my previous article, you will find a mention of SQL Server. We used SQL Azure to maintain user details, roles, permissions, and the controlling aspects of the active-passive switch of Redis. Many of these details were required on-premise as well, but to avoid additional overhead there we just used another Redis db. This data was only updatable from the cloud counterpart and was needed only as read-only on-premise (except for a few tables which anyway do not need to sync back to the cloud). Technically, we achieved this by having a unified repository interface with concrete Redis and SQL implementations.
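Here is a hedged sketch of what such a unified repository abstraction could look like, with the cloud wiring up the SQL implementation and on-premise wiring up the Redis one. The names and members are illustrative, not the actual project contracts:

using System.Collections.Generic;
using System.Threading.Tasks;

public class UserRecord
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Role { get; set; }
}

// One contract, two storage back-ends.
public interface IUserRepository
{
    Task<UserRecord> GetAsync(string userId);
    Task<IReadOnlyList<UserRecord>> GetAllAsync();
}

// Cloud: backed by SQL Azure (e.g. via EF Core or Dapper).
public class SqlUserRepository : IUserRepository
{
    public Task<UserRecord> GetAsync(string userId) => throw new System.NotImplementedException();
    public Task<IReadOnlyList<UserRecord>> GetAllAsync() => throw new System.NotImplementedException();
}

// On-premise: backed by a read-only Redis db that is synced down from the cloud.
public class RedisUserRepository : IUserRepository
{
    public Task<UserRecord> GetAsync(string userId) => throw new System.NotImplementedException();
    public Task<IReadOnlyList<UserRecord>> GetAllAsync() => throw new System.NotImplementedException();
}

// Composition root decides which implementation to register:
// services.AddScoped<IUserRepository, SqlUserRepository>();   // cloud
// services.AddScoped<IUserRepository, RedisUserRepository>(); // on-premise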

It's worth mentioning the various technical scenarios covered relating to data synchronization, including recovering from an app crash, intermittent internet disconnections, and switching over to a different machine when the primary machine is not recoverable.

I will cover these implementation aspects in the next article. Stay tuned!


Thursday, July 28, 2022

Distributed System Design with .Net ecosystem and Azure - Part 1

It's been a year since we finished a piece of software in the sports domain. It was a very interesting yet challenging project which we developed and deployed successfully at Aykan. I am glad that I got the opportunity to architect this fascinating project along with another senior person, Sudhir Garg (IIT Roorkee alumnus). It had a couple of interesting problems to solve. Though the overall project requirement is huge and not in the scope of this article, here are the key summary points from a solution architecture standpoint.

  1. Develop a sports management software which works both offline and online - in other words, on-premise and in the cloud.
  2. The offline data should sync with the cloud instantly or eventually, depending on the internet availability at the on-premise location.
  3. The software should integrate with a couple of external systems, both to consume and to send data.
  4. The live match data delivery latency should be less than a second.
  5. The on-premise software should be highly available so that live match data entry is never interrupted (this is the primary reason the on-premise requirement came up in the first place - historically, the internet is not reliable at various stadiums).
  6. The on-premise software should have a mechanism to integrate with the display devices available at the stadium.
  7. The on-premise system should have the ability to send data to local consumers.
  8. The cloud version should be geographically distributed and can have data consumers all over the world.
If we draw a diagram for these requirements, it would look something like this -


From the summary, it's pretty evident that we need a distributed solution, with the requirements favoring A and P over C of the CAP theorem - that is, Availability and Partition tolerance over Consistency (not to be confused with the consistency guaranteed by ACID database transactions; it's the consistency of distributed systems we are talking about here, specifically the data synchronization between the on-premise and cloud counterparts of the software).

From the application functionality standpoint, both the on-premise and cloud counterparts had almost the same features, except for a very few that were specific to either on-premise or cloud. With that in mind, we decided to go with the same source code for both the on-premise and cloud implementations.

After months of research, brainstorming and POCs, measured across various metrics such as performance, maintainability, external integration aspects, cost, time to develop, skilled resource availability at hand, etc., we finalized the following tech stack for the implementation -

  1. ASP.NET Core 3.1 with C# for backend development.
  2. Angular for front-end development.
  3. Redis as the primary DB - Yes, you heard it right - it's not used for caching here, but as the primary sports database with persistence enabled (there will be an article dedicated to this topic later).
  4. SQL Azure for the cloud - To store some metadata information.
  5. Azure Functions - For some serverless computations.
  6. Azure Storage - To store images, videos and other binary data.
  7. Apache Kafka as the message broker between on-premise, cloud and external systems.
  8. Azure WebJobs for some long-running background work which was not suitable for Azure Functions.
  9. SignalR - To push real-time data from the server to the Angular front-end.
  10. Docker with Linux as the runtime environment.
I will discuss the thought process behind these choices as we move on with each topic.

Data Structure and Manipulation

How we structure our data plays an important role in any system design. Hence, our first focus was to formalize a structure which was easy to work with yet had all the necessary properties for syncing purposes.

Basically, we created an object structure to represent a single transaction unit, composed of a header and a body part. The header section contains metadata of the transaction such as the timestamp, transaction id, tournament id, etc., whereas the body part is a collection of the real data along with the type of each operation (it can be either set or delete - these are the only two write operations possible on a Redis hash table, and that's our primary database; the reason for choosing this will be covered in the upcoming articles).

In other words, one transaction unit in our application is fundamentally composed of multiple Redis operations spanning multiple hash tables.
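As an illustration only (the type and property names below are assumptions, not the real contract), such a transaction unit can be pictured roughly like this:

using System;
using System.Collections.Generic;

// The kind of write being replayed against a Redis hash.
public enum RedisOperationType { Set, Delete }

public class TransactionHeader
{
    public Guid TransactionId { get; set; }
    public string TournamentId { get; set; }
    public DateTimeOffset Timestamp { get; set; }
}

public class RedisOperation
{
    public RedisOperationType Type { get; set; }
    public string HashKey { get; set; }   // which hash table
    public string Field { get; set; }     // which entry within the hash
    public string Value { get; set; }     // serialized payload (ignored for Delete)
}

// One unit = one atomic commit, possibly spanning multiple hashes.
public class TransactionUnit
{
    public TransactionHeader Header { get; set; } = new TransactionHeader();
    public List<RedisOperation> Body { get; set; } = new List<RedisOperation>();
}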

We also made a decision that for one API call, there will be only one Redis transaction commit. This guarantees that all the operations performed in a single web request are atomic in nature. Any failure will surface as an error to the invoker, and a retry can be performed.

To ensure the above design was strictly followed by all application developers on the team, we enforced it with a combination of a generic repository pattern without a save/commit feature and the decorator design pattern - or, more practically speaking, the middleware feature of ASP.NET Core (a rough sketch follows after the diagram).

Here is the diagrammatic representation of the same - 

The diagram is an oversimplified version of the real implementation, showing only the components essential for this discussion.
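To give a feel for the enforcement described above, here is a hedged ASP.NET Core sketch: repositories only queue operations on a unit of work (there is deliberately no save/commit on them), and a middleware commits the queued operations once per request. The IRedisUnitOfWork abstraction and type names are illustrative assumptions, not the actual project code:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Collects set/delete operations during the request; CommitAsync runs them
// as a single Redis transaction (MULTI/EXEC).
public interface IRedisUnitOfWork
{
    void QueueSet(string hashKey, string field, string value);
    void QueueDelete(string hashKey, string field);
    Task CommitAsync();
}

// Generic repository without a Save/Commit member - it can only queue work.
public class GenericRepository<T>
{
    private readonly IRedisUnitOfWork _uow;
    public GenericRepository(IRedisUnitOfWork uow) => _uow = uow;
    // ... Add/Update/Delete methods that call _uow.QueueSet / _uow.QueueDelete
}

// Middleware commits exactly once per successful request.
public class RedisTransactionMiddleware
{
    private readonly RequestDelegate _next;
    public RedisTransactionMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context, IRedisUnitOfWork uow)
    {
        await _next(context);    // controllers and services only queue operations
        await uow.CommitAsync(); // single atomic commit; failures bubble up to the caller
    }
}

// Startup.cs (abridged): app.UseMiddleware<RedisTransactionMiddleware>();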

This is the foundation stone of the overall solution.


Friday, November 30, 2018

Scoped DI Provider for Azure Function

Azure Functions does not provide in-built support for dependency injection (at least at the time of writing this article). There is, however, a user voice ticket marked as started at this point.

Currently, there is a good NuGet package available with which you can bring in DI support with little effort, as mentioned on its repo home page. The author of this package, Boris Wilhelms, also wrote a blog post explaining how this package works, if you would like to understand it.

I was, however, looking for an even simpler workaround until Microsoft provides native support.

So, here is the approach that I followed.

I created the following wrapper class for DI:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public static class FunctionScopedDIProvider
{
    private static IServiceProvider serviceProvider;
    private static readonly object locker = new object();

    public static async Task Using(Func<IServiceProvider, Task> action)
    {
        if (serviceProvider == null)
        {
            lock (locker)
            {
                if (serviceProvider == null)
                {
                    serviceProvider = BuildServiceProvider();
                }
            }
        }

        var scopeFactory = serviceProvider.GetService<IServiceScopeFactory>();
        using (var scope = scopeFactory.CreateScope())
        {
            await action(scope.ServiceProvider);
        }
    }

    private static IServiceProvider BuildServiceProvider()
    {
        var services = new ServiceCollection();
        
        // TODO: Do your registrations here.
    // Ex: services.AddScoped<IMyService, MyService>();
        
        return services.BuildServiceProvider();
    }
}

Which can be used like this:

[FunctionName("Function1")]
public static async Task Run([QueueTrigger("myqueue-items", Connection = "")]string myQueueItem, ILogger log)
{
    await FunctionScopedDIProvider.Using(async provider =>
    {
        // The provider here would work as expected, including scoped lifetime.
        // Ex: var myService = provider.GetService<IMyService>();

        // TODO: Your function logic goes here.

        // A mock task completion as we don't have a real awaitable task here.
        await Task.CompletedTask;
    });

    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}

Note: The Using function of FunctionScopedDIProvider above does not necessarily need to be async, and neither does its parameter. It can simply be an Action delegate instead of a Func delegate.

This is how the non-async version's signature would look:

public static void Using(Action<IServiceProvider> action)

Are you wondering why we need to do this, when we can easily get a DI provider by calling the BuildServiceProvider method of ServiceCollection? Well, it's because of the singleton and scoped object lifetime support of the DI.

In other words, if you create a new ServiceProvider each time, you won't get singleton behavior, and if you reuse the ServiceProvider (for example, by keeping it in a static variable), you won't get scoped behavior.

So the solution I came up with creates a static ServiceProvider in a thread-safe manner and creates a new scope each time with the help of the in-built IServiceScopeFactory.

We could actually keep this logic in the Run function's body itself, but that clutters the code and is not reusable either. So I created a small wrapper with the delegate-using pattern (refer to the second point of the article titled - Measuring various function/method execution using StopWatch).

Enough talking, here is the full code. Enjoy your day!

Important: This code doesn't work if you would like to do DI for the function entry point itself. For that, you should use the solution provided by Boris Wilhelms, which I mentioned at the beginning of this post.

Sunday, September 27, 2015

Asp.Net MVC Intellisense for ViewBag

Introduction/Problem Statement: If you are a typical MVC developer, you might have used ViewBag or ViewData at times to pass data from controller to view. It is well understood that ViewBag is a dynamic type, so you will not get IntelliSense for it. It is a similar case with ViewData, which is a dictionary, so there is no strongly-typed support and you need to deal with magic strings.

Workaround: We can easily create our own workaround for this. Here is a sample -
using System.Web;

/// <summary>
/// A class to hold the data which can be used to interact between controller and view.
/// </summary>
public class ViewBagHelper
{
    /// <summary>
    /// Gets the current instance of <see cref="ViewBagHelper"/>.
    /// </summary>
    public static ViewBagHelper Items
    {
        get
        {
            if(HttpContext.Current.Items["item"] == null)
            {
                HttpContext.Current.Items["item"] = new ViewBagHelper();
            }

            return HttpContext.Current.Items["item"] as ViewBagHelper;
        }
    }

    // TODO: Add your intended ViewBag/ViewData properties here. Ex: To hold an id, do this
    // public int Id { get; set; }
}

Now anywhere you want to use ViewBag just do this - 

ViewBagHelper.Items.MyItem

Here, MyItem is your item name, which is basically a property of the ViewBagHelper class. While using this in the view, make sure you either use the fully qualified name or add the appropriate namespace to the MVC Views' Web.config file. For example, if my helper resides in the namespace 'ViewBagIntellisense.Helper', then the razor config section of your Web.config should have this -
<system.web.webPages.razor>
  <pages pageBaseType="System.Web.Mvc.WebViewPage">
    <namespaces>
      <add namespace="System.Web.Mvc" />
      <add namespace="System.Web.Mvc.Html" />
      <add namespace="ViewBagIntellisense.Helper" />
    </namespaces>
  </pages>
</system.web.webPages.razor>
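For completeness, here is a small usage sketch. The Id property is just a hypothetical example member added to ViewBagHelper:

// Controller: set the value (strongly typed, with IntelliSense).
public ActionResult Details(int id)
{
    ViewBagHelper.Items.Id = id;
    return View();
}

// View (Razor), with the namespace registered as above:
// <p>Current id: @ViewBagHelper.Items.Id</p>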
The idea is not new; people often do this to avoid magic strings. For instance, three years back I myself asked the Stack Exchange community to review a similar pattern. Well, it is still in an unresolved state because the answers were not convincing enough. However, I have continued to use this pattern in various projects and it has worked well for me. You can even find it in one of my older articles.

Downsides: The one potential downside I can think of is that it shares various state representations across the application. Still, the memory footprint will be considerably low, as reference-type properties get initialized only on an as-needed basis and value types are inherently lightweight (unless, of course, you create a heavy one!).

Recommendation/Best Practice: It is always good to have a Model/ViewModel to exchange data between controller and view. Use this pattern (or ViewBag in general) only if you have cross-cutting scenarios where putting the data in a Model/ViewModel doesn't make much sense, or when you have only one piece of data to transfer.

Tuesday, July 30, 2013

MVC 4 - WCF - Add Service Reference Generates Blank Proxy

After a long time, I created a WCF service with ws2007HttpBinding which has a complex data model (with inheritance involved). I created the service as usual with the well-known configuration, hoping it would work. I ran the application and it ran without any error. Bingo! Wow, great - I created a service in a single go!

Now it was time to create a client application. The client I was trying to create is an MVC 4 web application (which I am most familiar with). The app was already present, and I just needed to add a service reference to consume my WCF service. As usual, I right-clicked the project, clicked on Add Service Reference, entered the service URL, gave a friendly name to the service and clicked OK. Wow! I was ready with a client for my service. Now it was time to call the service method by creating an instance of the client proxy. Being a lazy guy, I waited for the Visual Studio IntelliSense to show the friendly proxy name after typing its first few letters. Unfortunately, IntelliSense was not showing anything. OMG, time to wake up!

I started tracing to identify what went wrong:
            The first step was to check the proxy code (because I can't rely on IntelliSense for 100% accuracy; there are times when everything is fine but IntelliSense itself is having problems). I did that by clicking on the marked places below:

I noticed that the proxy classes were not generated and the Reference.cs file was blank:

Now my mind also went blank! I was following all the holy steps, but it was still not working as expected. I went back to my WCF application and started verifying the following things:
  • Whether all my required data structures are decorated with DataContract (classes) & DataMember (properties) or not.
  • Whether the base class is decorated with DataContract or not. 
  • Whether the KnownTypes are set or not.
Everything was looking fine there, but still the proxy was not getting created. Frustrating!

I decided to check the service with the WcfTestClient tool (VS command prompt -> wcftestclient -> right-click and click on Add Service -> enter the service URL -> OK).


Wow, great - it works here! Then what might be the problem with my MVC 4 application? I got another thought: I created a new console application and added the service reference in the same way as I did in the MVC 4 app. Amazingly, it worked in a single go.

Now it was pretty clear that there was something wrong in the MVC 4 app. It's time to find the solution (well, just a Google search will do it).

Solution:
Go to Configure Service Reference.


Select the option 'Reuse types in specified referenced assemblies' -> check all except Newtonsoft.Json -> OK.

That's it - the proxy will be created now. We can also do this while adding the service reference itself, by going to the Advanced window.

Note: Microsoft has fixed this issue in this update: http://support.microsoft.com/kb/2750149

The Cause (by Microsoft):
This issue occurs because the DataContractSerializer class has encountered a type (Newtonsoft.Json.Linq.JToken) that it does not support. In this case, it throws an exception, and then stops generating the service reference.

Conclusion: Since I was working in WCF after a long time, I kept thinking there was something wrong in my service and spent a lot of time trying to figure it out. I also missed one basic step in my tracing which could have saved my time (i.e. checking the Error List). Hope this article helps someone else in the future.

Out of interest, the Error List was showing this error and warning.

Error:Custom tool error: Failed to generate code for the service reference 'ServiceReference1'.
Please check other error and warning messages for details.

Warning:
Custom tool warning: Cannot import wsdl:portType
Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
Error: Type 'Newtonsoft.Json.Linq.JToken' is a recursive collection data contract which is not supported. 
Consider modifying the definition of collection 'Newtonsoft.Json.Linq.JToken' to remove references to itself.

Sunday, June 9, 2013

Assembly Reference - Best Practice

Introduction: 
             
               It's obvious that applications have references to one or more assemblies to accomplish their job. The CLR looks for the referenced assemblies in the global assembly cache (GAC), in any private path added to the app domain, or in a special folder called bin (i.e. the current directory of the executing assembly).

             While working with the Visual Studio IDE, we reference lots of related libraries using the Add Reference tool. When we compile the code, the referenced assembly files are copied from the source location and placed into the special directory called bin, so that the application can find those assemblies at runtime and load them into memory. If we reference the assemblies wrongly, there will always be a configuration problem when we move the application from one location to another. This article discusses those issues and possible solutions by providing some best practices.

Background: 
         
            It was an unfortunate situation that my machine changed 3 times in the last one month - and there is still one more change to come! Each time the machine changed, I had to set up the current working project. The project uses Microsoft Enterprise Library and some assemblies from it. The problem with the setup is that the project fully relies on the GAC for these assemblies, and the bin folder of our project is read-only by default. Hence, the compiler fails to copy the files from the GAC to bin. Also, I forget to remove the read-only attribute from the bin folder each time, causing the application to fail at runtime with the following error, or sometimes some other error with this as its InnerException:

Could not load file or assembly 'xxx' or one of its dependencies.
The system cannot find the file specified.

Though fixing the above issue is simple, it wastes time which we could otherwise save. This and some of my previous experiences made me come up with a best practice to avoid this problem.

Wrong Approaches: 
  1. Referring to assemblies from the bin directory.
  2. Not referencing anything and making the assembly available via the bin folder.
  3. Relying on the GAC for almost every assembly.
The first and second approaches should never be used (NEVER DO THAT). The third approach is okay, though personally I prefer to have my own copy (because it's good for portability - especially in web applications). Usually, placing assemblies in the GAC is preferred only for shared assemblies.

What is the bin folder?

         Bin stands for binary. By default, in Visual Studio it is configured as the output folder for your built code.


In Visual Studio, you can change this to any other directory to obtain the built code (PE files). When you right-click your project or solution in Visual Studio and click on 'Clean', it clears the output directory, and as soon as you build your project, new build files are placed there. Usually, the new build files override the existing files in the output directory when we rebuild our projects, but sometimes there are glitches and we need to execute the Clean command explicitly, or we may need to delete the output folder from the file system manually.

Why is referring to and placing assemblies in bin a bad idea?

         Since bin is an output folder, each time you build your code the new files override the existing ones in bin. This in itself won't cause any problems, but there are situations like:
  1. Sometimes Visual Studio picks up older versions of assemblies for execution, and we may need to delete the bin folder to recover from this issue.
  2. We may need to remove the bin and obj folders to reduce the size while porting the application.
As you can see, there are situations where the bin folder needs to be deleted or recreated. Hence, it is not good to depend on this directory for required assemblies.

Best Practice:
  1. Within the application's root directory, create a folder called Lib, Library or whatever you prefer.
  2. Place the required assemblies there.
  3. Reference them in the application using the Add Reference tool of Visual Studio.
This way, you can always be sure that the required assemblies exist and are available all the time. Also, you can delete or recreate the bin folder without any issue. If you follow this simple best practice, no other configuration is required to set up your application on any machine.

NuGet Package: 
           
            Before I conclude, here is an interesting tool to introduce (you might already be using it if you work with MVC projects - at least to set up your project). NuGet is a tool which helps you to port your application easily by reducing or completely eliminating the configuration burden.

          Even this tool uses the same pattern for assembly references. It creates a folder called packages at the root of your application and modifies your project file to reference the assemblies from that location.

Conclusion: It is a bad idea to reference assemblies directly from the bin folder. Rather, I recommend creating a dedicated folder for this purpose and referencing the assemblies from there. This enables you to port and configure the application easily.


Thursday, February 28, 2013

Tips and Tricks of Exception Handling in .Net - Part 3

This is the continuation of my article Tips and Tricks of Exception Handling in .Net - Part 2

Fail Fast Method: There are situations where your application's state is so bad that no further code should be executed - not even finally block code - and your application needs to be closed immediately. How do you do this?
           The answer is Environment's FailFast method. As the name suggests, this is a special method which causes your application to fail immediately. It also creates a dump and an event viewer entry.
 try 
 {
     Environment.FailFast("Text to be logged");
 }
 finally
 {
     Console.WriteLine("This finally block will not be executed.");
 }
Compile the above code and execute the generated .exe; you will see the following Windows prompt, which you usually get when your application has an unhandled exception.

Process.GetCurrentProcess().Kill() vs Environment.FailFast(""): The finally block will not be executed even if you use Process.GetCurrentProcess().Kill(), and this also ends your application immediately - so how does it differ from FailFast? Well, the difference is that FailFast will create a dump and an event viewer entry, but killing the current process will not.

Note: Both should be used with extreme care. In the real world, these are rarely required.


Corrupted State Exceptions (CSE): These are exceptions which cannot be caught. Behind the scenes, Environment's FailFast method throws one of these exceptions. Hence, it cannot be caught and your application ends with an unhandled exception.

Here is the explanation of CSEs in Jeffrey Richter's words:
      Usually, the CLR considers a few exceptions thrown by native code to be Corrupted State Exceptions, because they are usually the result of a bug in the CLR itself or in some native code over which the managed developer has no control. Here is the list of native Win32 exceptions that are considered CSEs:

EXCEPTION_ACCESS_VIOLATION        EXCEPTION_STACK_OVERFLOW
EXCEPTION_ILLEGAL_INSTRUCTION     EXCEPTION_IN_PAGE_ERROR
EXCEPTION_INVALID_DISPOSITION     EXCEPTION_NONCONTINUABLE_EXCEPTION
EXCEPTION_PRIV_INSTRUCTION        STATUS_UNWIND_CONSOLIDATE

By default, the CLR will not let managed code catch these exceptions and finally blocks will not execute. However, individual managed methods can override the default and catch these exceptions by applying the System.Runtime.ExceptionServices.HandleProcessCorruptedStateExceptionsAttribute to the method. In addition, the method must have the System.Security.SecurityCriticalAttribute applied to it. You can also override the default for an entire process by setting the legacyCorruptedStateExceptionPolicy element in the application's Extensible Markup Language (XML) configuration file to true. The CLR converts most of these to a System.Runtime.InteropServices.SEHException object except for EXCEPTION_ACCESS_VIOLATION, which is converted to a System.AccessViolationException object, and EXCEPTION_STACK_OVERFLOW, which is converted to a System.StackOverflowException object.

Note: Even with the HandleProcessCorruptedStateExceptions attribute, we cannot handle the following exceptions, for the reasons given below (a short usage sketch of the attribute follows this list):

  • StackOverflowException - As this is a hardware failure and there is no more stack available for further processing (Thanks Abel Braaksma for pointing this out).
  • ExecutionEngineException - It occurs because of heap memory corruption and hence cannot be handled further (Reference).
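For reference, here is a minimal sketch of opting a single method into catching a CSE on the .NET Framework, as described in the quote above. The native call is a hypothetical placeholder:

using System;
using System.Runtime.ExceptionServices;
using System.Security;

class CseDemo
{
    // Opts this one method back into catching corrupted state exceptions.
    [HandleProcessCorruptedStateExceptions]
    [SecurityCritical]
    static void CallNativeCode()
    {
        try
        {
            SomeNativeInterop(); // placeholder for a P/Invoke call that may corrupt process state
        }
        catch (AccessViolationException ex)
        {
            Console.WriteLine("Caught a CSE: " + ex.Message);
        }
    }

    static void SomeNativeInterop() { }
}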

Summary:
  • Structured Exception Handling - This is the exception handling mechanism offered by Microsoft Windows, and .NET Framework exception handling is built on top of it.
  • What can be thrown? - The CLR allows an instance of any type to be thrown, but the CLS mandates throwing only Exception-derived objects. The C# compiler allows throwing only Exception-derived types.
  • Throw a different exception than the originally thrown one when you want to maintain the meaning of a method's contract.
  • Make sure to set the original exception as the inner exception when you throw a different exception.
  • Constrained Execution Regions - Make sure that your catch or finally block will never throw an exception by making use of CERs. You can use them by calling RuntimeHelpers' PrepareConstrainedRegions method and applying the ReliabilityContract attribute wherever required.
  • The C# compiler automatically emits try/finally blocks whenever you use the lock, using, and foreach statements.
  • Call Environment's FailFast method when you want your application to be closed immediately and to produce a dump with an event viewer entry.