What is the difference between concurrency and parallelism?
Examples are appreciated.
Concurrency: multiple execution flows with the potential to share resources.
Ex: two threads competing for an I/O port.
Parallelism: splitting a problem into multiple similar chunks.
Ex: parsing a big file by running one process on each half of the file.
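For instance, here is a minimal Go sketch of that parallel file example; the file name "big.txt" and the count-newlines workload are stand-ins for real parsing:

// A minimal sketch, not a full parser: process two halves of a file
// at the same time, one goroutine per half.
package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	data, err := os.ReadFile("big.txt") // hypothetical big input file
	if err != nil {
		panic(err)
	}
	mid := len(data) / 2
	halves := [][]byte{data[:mid], data[mid:]}

	counts := make([]int, len(halves))
	var wg sync.WaitGroup
	for i, half := range halves {
		wg.Add(1)
		go func(i int, chunk []byte) { // one goroutine per half
			defer wg.Done()
			for _, b := range chunk {
				if b == '\n' {
					counts[i]++ // distinct indices: no shared-state race
				}
			}
		}(i, half)
	}
	wg.Wait()
	fmt.Println("lines:", counts[0]+counts[1])
}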
Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant. For example, multitasking on a single-core machine.
Parallelism is when tasks literally run at the same time, e.g., on a multicore processor.
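One way to see this difference on a real machine, sketched in Go under the assumption of a multicore CPU: limiting the runtime to one core forces pure time-sliced multitasking (concurrency only), while removing that limit lets the same two tasks run in parallel.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func busyWork(label string, wg *sync.WaitGroup) {
	defer wg.Done()
	sum := 0
	for i := 0; i < 200_000_000; i++ {
		sum += i
	}
	fmt.Println(label, "done:", sum)
}

func main() {
	runtime.GOMAXPROCS(1) // force time-slicing on one core; delete for parallelism
	start := time.Now()
	var wg sync.WaitGroup
	wg.Add(2)
	go busyWork("A", &wg) // both tasks overlap in time either way (concurrency)
	go busyWork("B", &wg)
	wg.Wait()
	fmt.Println("elapsed:", time.Since(start)) // roughly halves without GOMAXPROCS(1)
}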
Concurrency: A condition that exists when at least two threads are making progress. A more generalized form of parallelism that can include time-slicing as a form of virtual parallelism.
Parallelism: A condition that arises when at least two threads are executing simultaneously.
They solve different problems. Concurrency solves the problem of having scarce CPU resources and many tasks. So, you create threads or independent paths of execution through code in order to share time on the scarce resource. Up until recently, concurrency has dominated the discussion because of the scarcity of CPUs.
Parallelism solves the problem of finding enough tasks and appropriate tasks (ones that can be split apart correctly) and distributing them over plentiful CPU resources. Parallelism has always been around of course, but it's coming to the forefront because multi-core processors are so cheap.
Concurrency: If two or more problems are solved by a single processor.
Parallelism: If one problem is solved by multiple processors.
Rob usually talks about Go and usually addresses the question of Concurrency vs Parallelism in a visual and intuitive explanation! Here is a short summary:
Task: Let's burn a pile of obsolete language manuals! One at a time!
Concurrency: There are many concurrent decompositions of the task! One example:
Parallelism: The previous configuration occurs in parallel if there are at least 2 gophers working at the same time.
To add onto what others have said:
Concurrency is like having a juggler juggle many balls. Regardless of how it seems, the juggler is only catching/throwing one ball per hand at a time. Parallelism is having multiple jugglers juggle balls simultaneously.
Concurrency => When multiple tasks are performed in overlapping time periods with shared resources (potentially maximizing resource utilization).
Parallel => When a single task is divided into multiple simple independent sub-tasks which can be performed simultaneously.
In electronics, serial and parallel represent a type of static topology, determining the actual behaviour of the circuit. When there is no concurrency, parallelism is deterministic.
In order to describe dynamic, time-related phenomena, we use the terms sequential and concurrent. For example, a certain outcome may be obtained via a certain sequence of tasks (e.g. a recipe). When we are talking with someone, we are producing a sequence of words. However, in reality, many other processes occur in the same moment, and thus concur to the actual result of a certain action. If a lot of people are talking at the same time, concurrent talks may interfere with our sequence, but the outcomes of this interference are not known in advance. Concurrency introduces indeterminacy.
The serial/parallel and sequential/concurrent characterizations are orthogonal. An example of this is in digital communication. In a serial adapter, a digital message is temporally (i.e. sequentially) distributed along the same communication line (e.g. one wire). In a parallel adapter, this is divided also on parallel communication lines (e.g. many wires), and then reconstructed on the receiving end.
Let us imagine a game with 9 children. If we arrange them in a chain, give a message to the first and receive it at the end, we would have a serial communication. More words compose the message, consisting of a sequence of communication units.
I like ice-cream so much. > X > X > X > X > X > X > X > X > X > ....
This is a sequential process reproduced on a serial infrastructure.
Now, let us imagine dividing the children into groups of 3. We divide the phrase into three parts, give the first to the child at the head of the line at our left, the second to the center line's child, etc.
I like ice-cream so much. > I like      > X > X > X > ....
                          > ice-cream   > X > X > X > ....
                          > so much     > X > X > X > ....
This is a sequential process reproduced on a parallel infrastructure (though still partially serialized).
In both cases, supposing there is a perfect communication between the children, the result is determined in advance.
If there are other persons that talk to the first child at the same time as you, then we will have concurrent processes. We do not know which process will be considered by the infrastructure, so the final outcome is not determined in advance.
Think of it as servicing queues, where a server can only serve the first job in a queue.
1 server, 1 job queue (with 5 jobs) -> no concurrency, no parallelism (only one job is being serviced to completion; the next job in the queue has to wait till the serviced job is done, and there is no other server to service it)
1 server, 2 or more different queues (with 5 jobs per queue) -> concurrency (since the server is sharing time with all the first jobs in the queues, equally or weighted), still no parallelism, since at any instant there is one and only one job being serviced.
2 or more servers, one queue -> parallelism (2 jobs done at the same instant) but no concurrency (the server is not sharing time; the 3rd job has to wait till one of the servers completes.)
2 or more servers, 2 or more different queues -> concurrency and parallelism
In other words, concurrency is sharing time to complete a job. It MAY take the same amount of time to complete the job, but at least it gets started early. The important thing is that jobs can be sliced into smaller jobs, which allows interleaving.
Parallelism is achieved simply with more CPUs, servers, people, etc. that run in parallel.
Keep in mind that if the resources are shared, pure parallelism cannot be achieved; but this is where concurrency has its best practical use, taking up another job that doesn't need that resource.
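A hedged sketch of this queue/server model in Go, where each "server" is a goroutine and the "queue" is a channel (the job counts and service times are made up); set nServers to 1 to remove the parallelism:

package main

import (
	"fmt"
	"sync"
	"time"
)

// serve is one "server": it takes the job at the head of the queue,
// services it to completion, then takes the next.
func serve(id int, queue <-chan int, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range queue {
		time.Sleep(10 * time.Millisecond) // simulated service time
		fmt.Printf("server %d finished job %d\n", id, job)
	}
}

func main() {
	const nServers, nJobs = 2, 5 // 2 servers, 1 queue: parallelism
	queue := make(chan int, nJobs)
	for j := 1; j <= nJobs; j++ {
		queue <- j
	}
	close(queue)

	var wg sync.WaitGroup
	for s := 1; s <= nServers; s++ {
		wg.Add(1)
		go serve(s, queue, &wg)
	}
	wg.Wait()
}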
Great, let me take a scenario to show what I understand. Suppose there are 3 kids named A, B, and C. A and B talk, C listens. For A and B, they are parallel: A: I am A. B: I am B.
But for C, his brain must take a concurrent process to listen to A and B; it may be: I am I A am B.
I'm going to offer an answer that conflicts a bit with some of the popular answers here. In my opinion, concurrency is a general term that includes parallelism. Concurrency applies to any situation where distinct tasks or units of work overlap in time. Parallelism applies more specifically to situations where distinct units of work are evaluated/executed at the same physical time. The raison d'etre of parallelism is speeding up software that can benefit from multiple physical compute resources. The other major concept that fits under concurrency is interactivity. Interactivity applies when the overlapping of tasks is observable from the outside world. The raison d'etre of interactivity is making software that is responsive to real-world entities like users, network peers, hardware peripherals, etc.
Parallelism and interactivity are almost entirely independent dimensions of concurrency. For a particular project, developers might care about either, both or neither. They tend to get conflated, not least because the abomination that is threads gives a reasonably convenient primitive to do both.
A little more detail about parallelism:
Parallelism exists at very small scales (e.g. instruction-level parallelism in processors), medium scales (e.g. multicore processors) and large scales (e.g. high-performance computing clusters). Pressure on software developers to expose more thread-level parallelism has increased in recent years, because of the growth of multicore processors. Parallelism is intimately connected to the notion of dependence. Dependences limit the extent to which parallelism can be achieved; two tasks cannot be executed in parallel if one depends on the other (ignoring speculation).
There are lots of patterns and frameworks that programmers use to express parallelism: pipelines, task pools, aggregate operations on data structures ("parallel arrays").
A little more detail about interactivity:
The most basic and common way to do interactivity is with events (i.e. an event loop and handlers/callbacks). For simple tasks events are great. Trying to do more complex tasks with events gets into stack ripping (a.k.a. callback hell; a.k.a. control inversion). When you get fed up with events you can try more exotic things like generators, coroutines (a.k.a. Async/Await), or cooperative threads.
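To make the event-loop shape concrete, here is a small Go sketch using select over channels in place of callbacks (the clicks channel and the timer are invented for the example):

package main

import (
	"fmt"
	"time"
)

func main() {
	clicks := make(chan string) // hypothetical source of UI events
	ticks := time.Tick(50 * time.Millisecond)

	go func() { // simulated outside world
		time.Sleep(120 * time.Millisecond)
		clicks <- "button pressed"
		close(clicks)
	}()

	for { // the event loop: one handler at a time, no parallelism required
		select {
		case msg, ok := <-clicks:
			if !ok {
				fmt.Println("source closed; leaving loop")
				return
			}
			fmt.Println("handle:", msg)
		case <-ticks:
			fmt.Println("timer fired")
		}
	}
}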
For the love of reliable software, please don't use threads if what you're going for is interactivity.
I dislike Rob Pike's "concurrency is not parallelism; it's better" slogan. Concurrency is neither better nor worse than parallelism. Concurrency includes interactivity which cannot be compared in a better/worse sort of way with parallelism. It's like saying "control flow is better than data".
Confusion exists because the dictionary meanings of both these words are almost the same.
Yet the way they are used in computer science and programming is quite different. Here is my interpretation:
Concurrency: interruptability - a task can be broken into pieces that are interleaved with other work.
Parallelism: independentability - a task (or its pieces) can be handed off and executed by a separate executioner at the same time.
So what do I mean by the above definitions?
I will clarify with a real-world analogy. Let's say you have to get 2 very important tasks done in one day:
1. Get your passport made
2. Get a presentation done
Now, the problem is that task-1 requires you to go to an extremely bureaucratic government office that makes you wait for 4 hours in a line to get your passport. Meanwhile, task-2 is required by your office, and it is a critical task. Both must be finished on a specific day.
Ordinarily, you will drive to the passport office for 2 hours, wait in the line for 4 hours, get the task done, drive back two hours, go home, stay awake 5 more hours and get the presentation done.
But you’re smart. You plan ahead. You carry a laptop with you, and while waiting in the line, you start working on your presentation. This way, once you get back at home, you just need to work 1 extra hour instead of 5.
In this case, both tasks are done by you, just in pieces. You interrupted the passport task while waiting in the line and worked on presentation. When your number was called, you interrupted presentation task and switched to passport task. The saving in time was essentially possible due to interruptability of both the tasks.
Concurrency, IMO, should be understood like the "isolation" property in the ACID guarantees of a database. Two database transactions satisfy the isolation requirement if you perform the sub-transactions in each in any interleaved way and the final result is the same as if the two tasks were done serially. Remember that for both the passport and presentation tasks, you are the sole executioner.
Now, since you are such a smart fella, you’re obviously a higher-up, and you have got an assistant. So, before you leave to start the passport task, you call him and tell him to prepare first draft of the presentation. You spend your entire day and finish passport task, come back and see your mails, and you find the presentation draft. He has done a pretty solid job and with some edits in 2 more hours, you finalize it.
Now, since your assistant is just as smart as you, he was able to work on it independently, without needing to constantly ask you for clarifications. Thus, due to the independentability of the tasks, they were performed at the same time by two different executioners.
Still with me? Alright...
Remember your passport task, where you have to wait in the line? Since it is your passport, your assistant cannot wait in line for you. Thus, the passport task has interruptability (you can stop it while waiting in the line, and resume it later when your number is called), but no independentability (your assistant cannot wait in your stead).
Suppose the government office has a security check to enter the premises. Here, you must remove all electronic devices and submit them to the officers, and they only return your devices after you complete your task.
In this case, the passport task is neither independentable nor interruptible. Even if you are waiting in the line, you cannot work on something else because you do not have the necessary equipment.
Similarly, say the presentation is so highly mathematical in nature that you require 100% concentration for at least 5 hours. You cannot do it while waiting in line for passport task, even if you have your laptop with you.
In this case, the presentation task is independentable (either you or your assistant can put in 5 hours of focused effort), but not interruptible.
Now, say that in addition to assigning your assistant to the presentation, you also carry a laptop with you to passport task. While waiting in the line, you see that your assistant has created the first 10 slides in a shared deck. You send comments on his work with some corrections. Later, when you arrive back home, instead of 2 hours to finalize the draft, you just need 15 minutes.
This was possible because presentation task has independentability (either one of you can do it) and interruptability (you can stop it and resume it later). So you concurrently executed both tasks, and executed the presentation task in parallel.
Let’s say that, in addition to being overly bureaucratic, the government office is corrupt. Thus, you can show your identification, enter it, start waiting in line for your number to be called, bribe a guard and someone else to hold your position in the line, sneak out, come back before your number is called, and resume waiting yourself.
In this case, you can perform both the passport and presentation tasks concurrently and in parallel. You can sneak out, and your position is held by your assistant. Both of you can then work on the presentation, etc.
In the computing world, here are example scenarios typical of each of these cases: a single-core machine time-slicing several processes (concurrent, not parallel); a computation split across cores with no interleaving within each part (parallel, not concurrent); a single blocking task on a single core (neither); and a multi-threaded program running on a multi-core machine (both).
If you want to see why Rob Pike is saying concurrency is better, you have to understand that the reason is this: you have a really long task in which there are multiple waiting periods where you wait for some external operations like a file read or a network download. In his lecture, all he is saying is, "just break up this long sequential task so that you can do something useful while you wait." That is why he talks about different organizations with various gophers.
Now the strength of Go comes from making this breaking up really easy with the go keyword and channels. Also, there is excellent underlying support in the runtime to schedule these goroutines.
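A minimal sketch of that idea (the URLs and the fixed sleep standing in for real downloads are assumptions):

package main

import (
	"fmt"
	"time"
)

// fetch stands in for a task dominated by waiting (network, disk, ...).
func fetch(url string, results chan<- string) {
	time.Sleep(100 * time.Millisecond) // simulated external latency
	results <- "done: " + url
}

func main() {
	urls := []string{"a.com", "b.com", "c.com"} // made-up URLs
	results := make(chan string)
	for _, u := range urls {
		go fetch(u, results) // the go keyword breaks the long task up
	}
	for range urls {
		fmt.Println(<-results) // the waits overlap instead of adding up
	}
}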
But essentially, is concurrency better than parallelism?
Are apples preferred more than oranges?
Concurrency simply means more than one task is in progress (not necessarily in parallel). For example, assume we have 3 tasks; then at any moment of time more than one may be running, or all may be running at the same time.
Parallelism means they are literally running in parallel. So in that case all three must be running at the same time.
Concurrent is: "Two queues accessing one ATM machine"
Parallel is: "Two queues and two ATM machines"
I will try to explain with an interesting and easy-to-understand example. :)
Assume that an organization organizes a chess tournament where 10 players (with equal chess-playing skills) will challenge a professional champion chess player. And since chess is a 1:1 game, the organizers have to conduct the 10 games in a time-efficient manner so that they can finish the whole event as quickly as possible.
Hopefully the following scenarios will easily describe multiple ways of conducting these 10 games:
1) SERIAL - let's say that the professional plays with each person one by one, i.e. starts and finishes the game with one person and then starts the next game with the next person, and so on. In other words, they decided to conduct the games sequentially. So if one game takes 10 mins to complete, then 10 games will take 100 mins. Also assume that the transition from one game to the other takes 6 secs; then for the 9 transitions between the 10 games it will be 54 secs (approx. 1 min).
so the whole event will approximately complete in 101 mins (WORST APPROACH)
2) CONCURRENT - let's say that the professional plays his turn and moves on to the next player, so all 10 players are playing simultaneously, but the professional player is not with two persons at a time; he plays his turn and moves on to the next person. Now assume the professional player takes 6 secs to play his turn, and the transition time of the professional player between two players is also 6 secs, so the total transition time to get back to the first player is 1 min (10 x 6 secs). Therefore, by the time he is back to the first person, with whom the event was started, 2 mins have passed (10 x time_per_turn_by_champion + 10 x transition_time = 2 mins).
Assuming that all players take 45 secs to complete their turn, then based on 10 mins per game from the SERIAL event, the number of rounds before a game finishes should be 600/(45+6) = 11 rounds (approx.).
So the whole event will approximately complete in 11 x time_per_turn_by_player_&_champion + 11 x transition_time_across_10_players = 11 x 51 + 11 x 60 secs = 561 + 660 = 1221 secs = 20.35 mins (approximately)
SEE THE IMPROVEMENT from 101 mins to 20.35 mins (BETTER APPROACH)
3) PARALLEL - let's say the organizers get some extra funds and thus decide to invite two professional champion players (both equally capable), divide the set of the same 10 players (challengers) into two groups of 5 each, and assign one group to each champion. Now the event is progressing in parallel in these two sets, i.e. at least two players (one in each group) are playing against the two professional players in their respective groups.
However, within a group the professional player takes one player at a time (i.e. sequentially), so without any calculation you can easily deduce that the whole event will approximately complete in 101/2 = 50.5 mins.
SEE THE IMPROVEMENT from 101 mins to 50.5 mins (GOOD APPROACH)
4) CONCURRENT + PARALLEL - In the above scenario, let's say that the two champion players play concurrently (read the 2nd point) with the 5 players in their respective groups, so now games across groups are running in parallel, but within a group they are running concurrently.
So the games in one group will approximately complete in 11 x time_per_turn_by_player_&_champion + 11 x transition_time_across_5_players = 11 x 51 + 11 x 30 = 561 + 330 = 891 secs = 14.85 mins (approximately)
So the whole event (involving two such parallel running groups) will approximately complete in 14.85 mins
SEE THE IMPROVEMENT from 101 mins to 14.85 mins (BEST APPROACH)
NOTE: in the above scenario, if you replace the 10 players with 10 similar jobs and the two professional players with two CPU cores, then the following ordering (by total time taken, slowest first) will remain true:
SERIAL > PARALLEL > CONCURRENT > CONCURRENT+PARALLEL
(NOTE: this order might change for other scenarios, as it highly depends on the inter-dependency of jobs, communication needs between jobs, and transition overhead between jobs)
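To replay the arithmetic, here is a tiny Go sketch computing the four scenario times from the numbers used above:

package main

import "fmt"

func main() {
	const (
		gameSecs       = 600 // one full game in the serial setup (10 mins)
		switchSecs     = 6   // moving from one board to the next
		champTurnSecs  = 6
		playerTurnSecs = 45
		rounds         = 11 // approx. 600 / (45 + 6)
	)
	serial := 10*gameSecs + 9*switchSecs
	concurrent := rounds*(playerTurnSecs+champTurnSecs) + rounds*10*switchSecs
	parallel := 5*gameSecs + 4*switchSecs // each champion plays 5 games serially
	both := rounds*(playerTurnSecs+champTurnSecs) + rounds*5*switchSecs

	fmt.Printf("serial:              %5.1f mins\n", float64(serial)/60)     // ~100.9
	fmt.Printf("concurrent:          %5.1f mins\n", float64(concurrent)/60) // ~20.4
	fmt.Printf("parallel:            %5.1f mins\n", float64(parallel)/60)   // ~50.4
	fmt.Printf("concurrent+parallel: %5.1f mins\n", float64(both)/60)       // ~14.9
}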
Say you have a program that has two threads. The program can run in two ways:
Concurrency                 Concurrency + parallelism
(Single-Core CPU)           (Multi-Core CPU)
 ___                         ___ ___
|th1|                       |th1|th2|
|   |                       |   |___|
|___|___                    |   |___
    |th2|                   |___|th2|
 ___|___|                    ___|___|
|th1|                       |th1|
|___|___                    |   |___
    |th2|                   |   |th2|
In both cases we have concurrency from the mere fact that we have more than one thread running.
If we ran this program on a computer with a single CPU core, the OS would be switching between the two threads, allowing one thread to run at a time.
If we ran this program on a computer with a multi-core CPU then we would be able to run the two threads in parallel - side by side at the exact same time.
Concurrency is the generalized form of parallelism. For example, a parallel program can also be called concurrent, but the reverse is not true.
Concurrent execution is possible on a single processor (multiple threads, managed by a scheduler or thread pool).
Parallel execution is not possible on a single processor; it requires multiple processors (one process per processor).
Distributed computing is also a related topic, and it can also be called concurrent computing, but the reverse is not true, like parallelism.
For details read this research paper Concepts of Concurrent Programming
Pike's notion of "concurrency" is an intentional design and implementation decision. A concurrent-capable program design may or may not exhibit behavioral "parallelism"; it depends upon the runtime environment.
You don't want parallelism exhibited by a program that wasn't designed for concurrency. :-) But to the extent that it's a net gain for the relevant factors (power consumption, performance, etc.), you want a maximally-concurrent design so that the host system can parallelize its execution when possible.
Pike's Go programming language illustrates this in the extreme: invoking a function with the go keyword creates a goroutine, a lightweight thread that runs correctly concurrently and will run in parallel with the caller if the system is capable of it. An application with hundreds or even thousands of goroutines is perfectly ordinary in his world. (I'm no Go expert, that's just my take on it.)
Just by consulting the dictionary, you can see that concurrent (from Latin) means to run together, converge, agree; ergo there is a need to synchronize because there is competition for the same resources. Parallel (from Greek) means to duplicate on the side; ergo to do the same thing at the same time.
I really like Paul Butcher's answer to this question (he's the writer of Seven Concurrency Models in Seven Weeks):
Although they’re often confused, parallelism and concurrency are different things. Concurrency is an aspect of the problem domain—your code needs to handle multiple simultaneous (or near simultaneous) events. Parallelism, by contrast, is an aspect of the solution domain—you want to make your program run faster by processing different portions of the problem in parallel. Some approaches are applicable to concurrency, some to parallelism, and some to both. Understand which you’re faced with and choose the right tool for the job.
Concurrency can involve tasks run simultaneously or not (they can indeed be run on separate processors/cores, but they can as well be run in "ticks"). What is important is that concurrency always refers to doing a piece of one greater task. So basically it's a part of some computation. You have to be smart about what you can and cannot do simultaneously, and how to synchronize.
Parallelism means that you're just doing some things simultaneously. They don't need to be a part of solving one problem. Your threads can, for instance, solve a single problem each. Of course synchronization stuff also applies, but from a different perspective.
Imagine learning a new programming language by watching a video tutorial. You need to pause the video, apply what has been said in code, then continue watching. That's concurrency.
Now you're a professional programmer. And you enjoy listening to calm music while coding. That's Parallelism.
Parallelism: Having multiple threads do similar tasks which are independent of each other in terms of the data and resources they require. Eg: A Google crawler can spawn thousands of threads and each thread can do its task independently.
Concurrency: Concurrency comes into the picture when you have shared data or shared resources among the threads. In a transactional system this means you have to synchronize the critical section of the code using techniques like locks, semaphores, etc.
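As a small illustration of that synchronization point, a Go sketch where a mutex guards the critical section around shared data:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		balance int // the shared resource
		wg      sync.WaitGroup
	)
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // critical section: one goroutine at a time
			balance++ // without the lock this is a data race
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("balance:", balance) // always 100
}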
Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang is perhaps the most promising upcoming language for highly concurrent programming.
Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising language for high-performance parallel programming on shared-memory computers (including multicores).
Copied from my answer: https://stackoverflow.com/a/3982782
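As a toy illustration of this deterministic fork/join style (a naive row-parallel matrix multiply, not Strassen's algorithm):

package main

import (
	"fmt"
	"sync"
)

// mul computes C = A x B, farming out one row of C per goroutine.
// The parent waits for every subtask, so control flow stays deterministic.
func mul(a, b [][]int) [][]int {
	n := len(a)
	c := make([][]int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c[i] = make([]int, n)
			for j := 0; j < n; j++ {
				for k := 0; k < n; k++ {
					c[i][j] += a[i][k] * b[k][j]
				}
			}
		}(i)
	}
	wg.Wait()
	return c
}

func main() {
	a := [][]int{{1, 2}, {3, 4}}
	fmt.Println(mul(a, a)) // [[7 10] [15 22]]
}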
The explanation from this source was helpful for me:
Concurrency is related to how an application handles multiple tasks it works on. An application may process one task at a time (sequentially) or work on multiple tasks at the same time (concurrently).
Parallelism on the other hand, is related to how an application handles each individual task. An application may process the task serially from start to end, or split the task up into subtasks which can be completed in parallel.
As you can see, an application can be concurrent, but not parallel. This means that it processes more than one task at the same time, but the tasks are not broken down into subtasks.
An application can also be parallel but not concurrent. This means that the application only works on one task at a time, and this task is broken down into subtasks which can be processed in parallel.
Additionally, an application can be neither concurrent nor parallel. This means that it works on only one task at a time, and the task is never broken down into subtasks for parallel execution.
Finally, an application can also be both concurrent and parallel, in that it both works on multiple tasks at the same time, and also breaks each task down into subtasks for parallel execution. However, some of the benefits of concurrency and parallelism may be lost in this scenario, as the CPUs in the computer are already kept reasonably busy with either concurrency or parallelism alone. Combining it may lead to only a small performance gain or even performance loss.
"Concurrency" is when there are multiple things in progress.
"Parallelism" is when concurrent things are progressing at the same time.
Examples of concurrency without parallelism:
SqlDataReaders on a MARS connection.
Note, however, that the difference between concurrency and parallelism is often a matter of perspective. The above examples are non-parallel from the perspective of (observable effects of) executing your code. But there is instruction-level parallelism even within a single core. There are pieces of hardware doing things in parallel with the CPU and then interrupting the CPU when done. The GPU could be drawing to the screen while your window procedure or event handler is being executed. The DBMS could be traversing B-Trees for the next query while you are still fetching the results of the previous one. The browser could be doing layout or networking while your Promise.resolve() is being executed. Etc, etc...
So there you go. The world is as messy as always ;)
The simplest and most elegant way of understanding the two in my opinion is this. Concurrency allows interleaving of execution and so can give the illusion of parallelism. This means that a concurrent system can run your Youtube video alongside you writing up a document in Word, for example. The underlying OS, being a concurrent system, enables those tasks to interleave their execution. Because computers execute instructions so quickly, this gives the appearance of doing two things at once.
Parallelism is when such things really are in parallel. In the example above, you might find the video processing code is being executed on a single core, and the Word application is running on another. Note that this means that a concurrent program can also be in parallel! Structuring your application with threads and processes enables your program to exploit the underlying hardware and potentially be done in parallel.
Why not have everything be parallel then? One reason is because concurrency is a way of structuring programs and is a design decision to facilitate separation of concerns, whereas parallelism is often used in the name of performance. Another is that some things fundamentally cannot fully be done in parallel. An example of this would be adding two things to the back of a queue - you cannot insert both at the same time. Something must go first and the other behind it, or else you mess up the queue. Although we can interleave such execution (and so we get a concurrent queue), you cannot have it parallel.
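To illustrate the queue point, a small Go sketch: two goroutines attempt to enqueue concurrently, but a lock forces one to go first, so the enqueues interleave rather than run in parallel.

package main

import (
	"fmt"
	"sync"
)

// Queue is safe for concurrent use, but pushes are serialized:
// the mutex forces one of two simultaneous callers to go first.
type Queue struct {
	mu    sync.Mutex
	items []string
}

func (q *Queue) Push(v string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, v)
}

func main() {
	var q Queue
	var wg sync.WaitGroup
	for _, v := range []string{"first", "second"} {
		wg.Add(1)
		go func(v string) {
			defer wg.Done()
			q.Push(v)
		}(v)
	}
	wg.Wait()
	fmt.Println(q.items) // order depends on which push won the lock
}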
Hope this helps!
I really liked this graphical representation from another answer - I think it answers the question much better than a lot of the above answers
Parallelism vs Concurrency
When two threads are running in parallel, they are both running at the same time. For example, if we have two threads, A and B, then their parallel execution would look like this:
CPU 1: A ------------------------->
CPU 2: B ------------------------->
When two threads are running concurrently, their execution overlaps. Overlapping can happen in one of two ways: either the threads are executing at the same time (i.e. in parallel, as above), or their executions are being interleaved on the processor, like so:
CPU 1: A -----------> B ----------> A -----------> B ---------->
So, for our purposes, parallelism can be thought of as a special case of concurrency.
Source: Another answer here
Hope that helps.
(I'm quite surprised such a fundamental question is not resolved correctly and neatly for years...)
In short, both concurrency and parallelism are properties of computing.
As for the difference, here is the explanation from Robert Harper:
The first thing to understand is parallelism has nothing to do with concurrency. Concurrency is concerned with nondeterministic composition of programs (or their components). Parallelism is concerned with asymptotic efficiency of programs with deterministic behavior. Concurrency is all about managing the unmanageable: events arrive for reasons beyond our control, and we must respond to them. A user clicks a mouse, the window manager must respond, even though the display is demanding attention. Such situations are inherently nondeterministic, but we also employ pro forma nondeterminism in a deterministic setting by pretending that components signal events in an arbitrary order, and that we must respond to them as they arise. Nondeterministic composition is a powerful program structuring idea. Parallelism, on the other hand, is all about dependencies among the subcomputations of a deterministic computation. The result is not in doubt, but there are many means of achieving it, some more efficient than others. We wish to exploit those opportunities to our advantage.
They can be seen as sorts of orthogonal properties in programs. Read this blog post for additional illustrations. And this one discusses a bit more about the difference concerning components in programming, like threads.
Note that threading and multitasking are both implementations of computing serving more concrete purposes. They can be related to parallelism and concurrency, but not in an essential way. Thus they are hardly good entry points to start the explanation.
One more highlight: (physical) "time" has almost nothing to do with the properties discussed here. Time is just a way of implementing the measurement to show the significance of the properties, but it is far from the essence. Think twice about the role of "time" in time complexity - which is more or less similar, where the measurement is often even more significant.
Parallelism is simultaneous execution of processes on
multiple cores per CPU or
multiple CPUs (on a single motherboard).
Concurrency is when parallelism is emulated on a
single-core CPU by using scheduling algorithms that divide the CPU's time (time-slice). Processes are interleaved.
Here, a process = thread(s) + memory allocated by the OS (heap, registers, stack, class memory).