
Read Concern "available"

New in version 3.6.

A query with read concern “available” returns data from the instance with no guarantee that the data has been written to a majority of the replica set members (i.e. may be rolled back).
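
For example, a read can request this level explicitly. A minimal mongo shell sketch, using a hypothetical restaurants collection and filter:

```javascript
// Minimal sketch (the "restaurants" collection and the filter are hypothetical).
// Request read concern "available" explicitly with the find command.
db.runCommand( {
   find: "restaurants",
   filter: { borough: "Queens" },
   readConcern: { level: "available" }
} )

// Equivalent form using the shell's cursor helper.
db.restaurants.find( { borough: "Queens" } ).readConcern("available")
```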

Read concern “available” is the default for reads against secondaries if the reads are not associated with causally consistent sessions.
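
As a sketch of that default, routing reads to a secondary without naming a read concern (collection name hypothetical):

```javascript
// Route reads from this shell connection to a secondary.
db.getMongo().setReadPref("secondary");

// Outside a causally consistent session, this read defaults to
// read concern "available" because it targets a secondary.
db.restaurants.find( { borough: "Queens" } );
```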

For a sharded cluster, "available" read concern provides greater tolerance for partitions since it does not wait to ensure consistency guarantees. However, a query with "available" read concern may return orphaned documents if the shard is undergoing chunk migrations, since the "available" read concern, unlike "local" read concern, does not contact the shard's primary or the config servers for updated metadata.

For unsharded collections (including collections in a standalone deployment or a replica set deployment), "local" and "available" read concerns behave identically.

Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of the data in the system.

Causally Consistent Sessions

Read concern "available" is unavailable for use with causally consistent sessions.
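
A minimal sketch of the restriction, assuming a mongo shell connection, a hypothetical test database, and a hypothetical restaurants collection; reads inside such a session use a read concern compatible with causal consistency, for example "majority":

```javascript
// Start a causally consistent session; its reads cannot use read concern "available".
var session = db.getMongo().startSession( { causalConsistency: true } );
var sessionDb = session.getDatabase("test");   // hypothetical database name

// Use a causally consistent read concern instead, for example "majority".
sessionDb.restaurants.find( { borough: "Queens" } ).readConcern("majority");

session.endSession();
```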

Example

Consider the following timeline of a write operation Write0 to a three member replica set:

Note

For simplification, the example assumes:

  • All writes prior to Write0 have been successfully replicated to all members.
  • Writeprev is the write operation immediately before Write0.
  • No other writes have occurred after Write0.
Timeline of a write operation to a three member replica set.

| Time | Event | Most Recent Write | Most Recent w: "majority" Write |
| --- | --- | --- | --- |
| t0 | Primary applies Write0 | Primary: Write0; Secondary1: Writeprev; Secondary2: Writeprev | Primary: Writeprev; Secondary1: Writeprev; Secondary2: Writeprev |
| t1 | Secondary1 applies Write0 | Primary: Write0; Secondary1: Write0; Secondary2: Writeprev | Primary: Writeprev; Secondary1: Writeprev; Secondary2: Writeprev |
| t2 | Secondary2 applies Write0 | Primary: Write0; Secondary1: Write0; Secondary2: Write0 | Primary: Writeprev; Secondary1: Writeprev; Secondary2: Writeprev |
| t3 | Primary is aware of successful replication to Secondary1 and sends acknowledgement to the client | Primary: Write0; Secondary1: Write0; Secondary2: Write0 | Primary: Write0; Secondary1: Writeprev; Secondary2: Writeprev |
| t4 | Primary is aware of successful replication to Secondary2 | Primary: Write0; Secondary1: Write0; Secondary2: Write0 | Primary: Write0; Secondary1: Writeprev; Secondary2: Writeprev |
| t5 | Secondary1 receives notice (through regular replication mechanism) to update its snapshot of its most recent w: "majority" write | Primary: Write0; Secondary1: Write0; Secondary2: Write0 | Primary: Write0; Secondary1: Write0; Secondary2: Writeprev |
| t6 | Secondary2 receives notice (through regular replication mechanism) to update its snapshot of its most recent w: "majority" write | Primary: Write0; Secondary1: Write0; Secondary2: Write0 | Primary: Write0; Secondary1: Write0; Secondary2: Write0 |

Then, the following table summarizes the state of the data that a read operation with "available" read concern would see at time T.

| Read Target | Time T | State of Data |
| --- | --- | --- |
| Primary | After t0 | Data reflects Write0. |
| Secondary1 | Before t1 | Data reflects Writeprev. |
| Secondary1 | After t1 | Data reflects Write0. |
| Secondary2 | Before t2 | Data reflects Writeprev. |
| Secondary2 | After t2 | Data reflects Write0. |
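
For instance, between t1 and t5, a read sent to Secondary1 with "available" read concern returns data reflecting Write0 (the most recent data on that node), while a "majority" read on the same node still returns data reflecting Writeprev. A minimal mongo shell sketch, assuming reads are routed to Secondary1 and a hypothetical restaurants collection:

```javascript
// Route reads from this shell connection to a secondary (here, Secondary1).
db.getMongo().setReadPref("secondary");

// "available": returns the node's most recent data, which reflects Write0.
db.restaurants.find( { } ).readConcern("available");

// "majority": returns the node's majority-committed snapshot,
// which still reflects Writeprev between t1 and t5.
db.restaurants.find( { } ).readConcern("majority");
```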