Race Conditions

  • Two or more processes are reading or writing some shared data and the final result depends on who runs precisely when.

Tanenbaum example (Printer daemon)

  • Process enters the name of a file in the spooler dir
  • Printer daemon periodically checks whether any files need printing
  • Prints them and removes their names from the spooler dir
  • Spooler dir has numbered slots: 0, 1, 2, … (conceptually unbounded)
  • Two shared variables: out, points to the next file to be printed
  • in, points to the next free slot in the spooler dir
  • Procs A and B queue files for printing
    • A reads in, stores slot 7 in a local variable
    • Clock interrupt occurs; the CPU switches to B
    • B reads in, stores slot 7 in its own local variable
    • B writes its file name to slot 7, updates in to slot 8
    • A resumes, writes its file name to slot 7 (erasing what B put there), updates in to slot 8 again
    • Spooler dir is internally consistent, so nothing looks wrong
    • B never receives output
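The interleaving above can be re-enacted deterministically as straight-line Go; a minimal sketch, with the file names (a.txt, b.txt) and slot count made up for illustration:

```go
package main

import "fmt"

func main() {
	spooler := make([]string, 10) // slots 0..9 of the spooler directory
	in := 7                       // next free slot, as in the example

	// Process A reads `in` into a local variable, then a clock interrupt hits.
	aSlot := in // A stores slot 7

	// Process B runs: it reads the same value, writes its file, advances `in`.
	bSlot := in              // B also stores slot 7
	spooler[bSlot] = "b.txt" // B writes to slot 7
	in = bSlot + 1           // B updates in to 8

	// A resumes with its stale copy and overwrites B's entry.
	spooler[aSlot] = "a.txt" // A erases what B put there
	in = aSlot + 1           // A updates in to 8 again

	fmt.Println(spooler[7], in) // a.txt 8; b.txt is never printed
}
```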

Golang example (Incrementing a counter)

  • Proc 1 reads counter (0)
    • Yields, holding the value 0
    • Increments its copy to 1
  • Proc 2 reads counter (0)
    • Yields, holding the value 0
    • Increments its copy to 1
  • Proc 1 writes counter = 1
  • Proc 2 writes counter = 1 (Proc 1's update is lost)
  • Proc 1 reads counter (1)
    • Yields, holding the value 1
    • Increments its copy to 2
  • Proc 2 reads counter (1)
    • Yields, holding the value 1
    • Increments its copy to 2
  • Both write counter = 2: four increments, but the counter only reaches 2
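The steps above can be written out as straight-line Go, simulating each yield by reading into a local before either write lands; a minimal sketch:

```go
package main

import "fmt"

func main() {
	counter := 0

	// Round one: both procs read the counter before either writes it back.
	p1 := counter    // proc 1 reads 0, yields
	p2 := counter    // proc 2 reads 0, yields
	counter = p1 + 1 // proc 1 writes 1
	counter = p2 + 1 // proc 2 writes 1; proc 1's update is lost

	// Round two: the same interleaving loses another update.
	p1 = counter     // proc 1 reads 1, yields
	p2 = counter     // proc 2 reads 1, yields
	counter = p1 + 1 // proc 1 writes 2
	counter = p2 + 1 // proc 2 writes 2

	fmt.Println(counter) // 2, after four increments
}
```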


  • go run -race main.go (Go's built-in race detector flags unsynchronized accesses at runtime)
  • "Don't communicate by sharing memory; share memory by communicating" (Go proverb)
  • Pass the data structure or object over a channel instead of guarding it with locks

Avoiding Race Conditions

  • No two processes may be simultaneously inside their critical regions.
  • No assumptions may be made about speeds or the number of CPUs.
  • No process running outside its critical region may block any process.
  • No process should have to wait forever to enter its critical region.

Mutual Exclusion

  • While a process is busy updating shared memory no other process will attempt to enter shared memory space.
  • Disabling interrupts (Single CPU)
    • This approach is generally unattractive because it is unwise to give user processes the power to turn off interrupts.
  • Lock variables
    • Test the lock variable
    • If it is 0, set it to 1 and enter the critical region
    • Flaw: before the first process can set the lock to 1, another process is scheduled, reads the lock as 0, and enters too
    • Re-checking does not help: the race just moves, now occurring if the second process modifies the lock just after the first process has finished its second check
  • Busy Waiting
    • It should usually be avoided, since it wastes CPU time
  • Mutual Exclusion Algorithm: G. L. Peterson
    • Each process calls enter_region
    • Wait, if necessary, until safe to enter shared memory region
    • Process calls leave_region after done with shared memory
#define FALSE 0
#define TRUE  1
#define N     2                         /* number of processes */

int turn;                               /* whose turn is it? */
int interested[N];                      /* all values initially 0 (FALSE) */

void enter_region(int process)          /* process is 0 or 1 */
{
    int other;                          /* number of the other process */

    other = 1 - process;                /* the opposite of process */
    interested[process] = TRUE;         /* show that you are interested */
    turn = process;                     /* set flag */
    while (turn == process && interested[other] == TRUE) ;  /* busy wait (null statement) */
}

void leave_region(int process)          /* process: who is leaving */
{
    interested[process] = FALSE;        /* indicate departure from critical region */
}

Audio quality seems to have taken a massive hit since last time.

Yeah I was

  • On Linux
  • Whispering
  • Super close to my mic

I’m hoping to work it out this week.

while (turn == process && interested[other] == TRUE)

Literally global warming… :wink:


Busy waiting bad :stuck_out_tongue_winking_eye:

who did you call a goy?


I didn’t realize the devs actually called it “Go Proverbs”


this is why I hate tech

they treat it as religion

you know I made fun of python about this.



So this is just a toy to generate a bit of airflow through some nostrils of a few programmers.


Know your tools, understand your platform.
Code, profile, optimize. In that order.

I enjoy the fact that you don’t (pretend to) optimize first like so many others claim. <3

premature optimization is like premature ejaculation. it’s never fun for anyone involved, it ruins your time estimates and only soyboi virgin nerds do it.


Co-worker: but this means more rows in the DB
Me: have we measured the performance impact of those additional rows?
Co-worker: no, but…
Me: so optimization here is premature.
Manager: but how fast does it go?
Me: see prior, we have not measured performance.
Manager: but how fast does it go?
Me: mutes and plays BF4


chad move


Lmao was about to say something similar.

so bulgy… the bullshit is real… the world is melting and we are polishing the deck of the Titanic without even pulling it out of the salt water.


The Absolute State of software developers.


Actual footage of me trying to explain DB performance non-linearity as it relates to optimization and architecture:


I wasn’t in the industry at the time, but I am curious about all the robust tech from the 70s and 80s being “robust” because of quarterly/annually deploying infrastructure and software. Was “fast delivery” just “dev tools that we ran when a manager wanted a report”?

tbh, I do try to optimize low-hanging fruit, but usually not the architecture. Better to make it work and then see if it's good enough for what we're doing than to spend tons of time only to be told I need to change something anyway, with the optimizations making it harder to change.

that said, in my personal code I try to architect in such a way that it's decently optimal. I try to find a balance between optimization, it working, and simplicity of implementation.
