Concurrent programming is a term that is unfamiliar to many people. To someone who is not an expert in computing, concurrency can be described simply as a system doing more than one thing at the same time. To software engineers, however, concurrent programming is a more complex subject: programmers describe concurrency as multiple sequences of operations whose executions overlap in time.
Of course, concurrency is a natural phenomenon in the real world. At any given moment in daily life, many things are happening at once. As software engineers, we should expect to deal with this natural concurrency in order to design software that controls such systems appropriately. It would be easy to write separate programs to handle concurrent activities that evolve independently of one another. But what happens when the concurrent activities interact? Those interactions are what make designing concurrent systems challenging (Sce).
Many things in the real world happen simultaneously, and software must address them. To do so, many real-time software systems must respond to externally generated events that may occur at random times, in random order, or both. Designing a conventional procedural program to handle such situations has always been difficult for software engineers. Dividing the system into concurrent software elements, each dealing with one of these events, can be simpler. Concurrency can also improve a system's performance by preventing one activity from blocking another, and it can improve controllability as well: without concurrent components, one function cannot be started, stopped, or influenced mid-stream by another. These abilities make concurrency extremely important in software design.
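As a small sketch of this idea (not taken from the sources; the class, method, and event names are illustrative), the following Java program dedicates one thread to handling externally generated events as they arrive, while the main thread continues independently and can post events at any time:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class EventHandlerDemo {
    // Drains events from the queue until a "STOP" marker arrives,
    // returning the events that were handled.
    static List<String> handleUntilStop(BlockingQueue<String> events)
            throws InterruptedException {
        List<String> handled = new ArrayList<>();
        while (true) {
            String event = events.take(); // blocks until the next event
            if (event.equals("STOP")) break;
            handled.add(event);
        }
        return handled;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> events = new ArrayBlockingQueue<>(16);

        // A dedicated thread waits for events, so event handling
        // never blocks the rest of the program.
        Thread handler = new Thread(() -> {
            try {
                System.out.println(handleUntilStop(events));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        handler.start();

        // The main thread continues and posts events in any order,
        // at any time.
        events.put("sensor-reading");
        events.put("user-input");
        events.put("STOP");
        handler.join(); // prints [sensor-reading, user-input]
    }
}
```

Because the handler runs in its own thread, it can be started, stopped, or fed new events mid-stream, which is exactly the controllability the paragraph above describes.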
Along with these benefits, concurrent programming can be supported by multiple threads of control. Although threads of control can be implemented in different ways by hardware and software, the two most common mechanisms for managing them are multitasking and multithreading. Multitasking is the more familiar term, but it has a specific meaning in concurrency: when the operating system provides multitasking, the common unit of concurrency is the process. A process provides a memory space, a thread of execution, and usually some means of sending messages to and receiving them from other processes. Multithreading, offered by many operating systems, is a lighter-weight alternative to processes. A thread lives inside a process, and all the threads in a process share a single memory space and the other resources controlled by that process (Sce).
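A minimal Java sketch of that last point (names are illustrative, not from the sources): the two threads below belong to one process, so they can both write into the same array in that process's shared memory.

```java
public class SharedProcessMemory {
    public static void main(String[] args) throws InterruptedException {
        // Both threads live in this one process and share its memory,
        // so each can fill a different slot of the same array.
        int[] results = new int[2];

        Thread t1 = new Thread(() -> results[0] = 10);
        Thread t2 = new Thread(() -> results[1] = 20);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // join() guarantees both writes are visible to the main thread.
        System.out.println(results[0] + results[1]); // prints 30
    }
}
```

Two separate processes could not share the array this way; they would need an explicit message-passing mechanism, which is why threads are considered the lighter-weight alternative.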
Threads communicate efficiently by sharing access to fields and to the objects that reference fields refer to. This efficiency, however, comes with possible errors: thread interference and memory consistency errors. Both can be prevented with a tool called synchronization. According to Wellings of the University of York, when a method is labeled as synchronized, a call to it can proceed only once the caller has obtained the object's lock. Hence, synchronized methods have mutually exclusive access to the data encapsulated by the object, provided that data is accessed only by other synchronized methods (Wellings). One drawback of synchronization is that it can introduce thread contention, which occurs when two or more threads try to access the same resource at the same time, causing the runtime to execute some threads more slowly or even suspend their execution.
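A short sketch of a synchronized method in Java (the class is illustrative, not from Wellings): because only one thread at a time can hold the counter's lock, the two increments below cannot interleave and interfere, so the final count is always exact.

```java
public class SynchronizedCounter {
    private int count = 0;

    // Only one thread at a time may hold this object's lock,
    // so increments cannot interleave and interfere.
    public synchronized void increment() {
        count++;
    }

    public synchronized int value() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.value()); // prints 2000
    }
}
```

If `increment()` were not synchronized, the two threads could read and write `count` at overlapping times and lose updates, which is exactly the thread interference described above; the lock is also where the contention cost mentioned above comes from.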
The issues mentioned above can be considered fundamental issues of concurrent programming. In addition, some practical issues must be explicitly addressed in the design of concurrent software: performance tradeoffs and complexity tradeoffs. Performance tradeoffs arise because the CPU cycles spent simulating concurrency and switching between tasks could otherwise be spent on the application itself. Complexity tradeoffs arise because concurrent software needs coordination and control mechanisms that a sequential application does not, which makes the design more complex (Sce).
In conclusion, concurrent programs are used more and more in the design of new systems because they can increase an application's efficiency, make more effective use of resources, and provide fault tolerance.