tf.random.Generator  |  TensorFlow v2.11.0 (2023)


Random-number generator.

View aliases

Main aliases

tf.random.experimental.Generator

Compat aliases for migration

See the Migration guide for more details.

tf.compat.v1.random.Generator, tf.compat.v1.random.experimental.Generator

```python
tf.random.Generator(
    copy_from=None, state=None, alg=None
)
```

Used in the notebooks

Used in the guide and tutorials:
  • Random number generation
  • Data augmentation
  • Random noise generation in TFF

Example:

Creating a generator from a seed:

```python
g = tf.random.Generator.from_seed(1234)
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.9356609 ,  1.0854305 , -0.93788373],
       [-0.5061547 ,  1.3169702 ,  0.7137579 ]], dtype=float32)>
```

Creating a generator from a non-deterministic state:

```python
g = tf.random.Generator.from_non_deterministic_state()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
```

All the constructors allow explicitly choosing a random-number generation (RNG) algorithm. Supported algorithms are "philox" and "threefry". For example:

```python
g = tf.random.Generator.from_seed(123, alg="philox")
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.8673864 , -0.29899067, -0.9310337 ],
       [-1.5828488 ,  1.2481191 , -0.6770643 ]], dtype=float32)>
```

CPU, GPU and TPU with the same algorithm and seed will generate the same integer random numbers. Floating-point results (such as the output of normal) may have small numerical discrepancies between different devices.

This class uses a tf.Variable to manage its internal state. Every time random numbers are generated, the state of the generator will change. For example:

```python
g = tf.random.Generator.from_seed(1234)
g.state
<tf.Variable ... numpy=array([1234, 0, 0])>
g.normal(shape=(2, 3))
<...>
g.state
<tf.Variable ... numpy=array([2770, 0, 0])>
```

The shape of the state is algorithm-specific.

There is also a global generator:

```python
g = tf.random.get_global_generator()
g.normal(shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=...>
```

When creating a generator inside a tf.distribute.Strategy scope, each replica will get a different stream of random numbers.

For example, in this code:

```python
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
    g = tf.random.Generator.from_seed(1)
    def f():
        return g.normal([])
    results = strat.run(f).values
```

results[0] and results[1] will have different values.

If the generator is seeded (e.g. created via Generator.from_seed), the random numbers will be determined by the seed, even though different replicas get different numbers. One can think of a random number generated on a replica as a hash of the replica ID and a "master" random number that may be common to all replicas. Hence, the whole system is still deterministic.

(Note that the random numbers on different replicas are not correlated, even if they are deterministically determined by the same seed. They are not correlated in the sense that no matter what statistics one calculates on them, there won't be any discernible correlation.)

Generators can be freely saved and restored using tf.train.Checkpoint. The checkpoint can be restored in a distribution strategy with a different number of replicas than the original strategy. If a replica ID is present in both the original and the new distribution strategy, its state will be properly restored (i.e. the random-number stream from the restored point will be the same as that from the saving point) unless the replicas have already diverged in their RNG call traces before saving (e.g. one replica has made one RNG call while another has made two RNG calls). There is no such guarantee if the generator is saved in a strategy scope and restored outside of any strategy scope, or vice versa.
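A minimal sketch of this save/restore behavior, outside any distribution strategy (the checkpoint path is an arbitrary temporary directory chosen for illustration):

```python
import os
import tempfile

import tensorflow as tf

# Save a generator's state, draw some numbers, then restore and replay.
g = tf.random.Generator.from_seed(1)
ckpt = tf.train.Checkpoint(generator=g)
path = ckpt.write(os.path.join(tempfile.mkdtemp(), "gen"))

before = g.normal([3])   # advances the state past the saved point
ckpt.restore(path)       # rewinds the generator to the saved state
after = g.normal([3])    # replays the same stream
```

Because the state was restored to the saving point, `before` and `after` hold the same values.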

When a generator is created within the scope of tf.distribute.experimental.ParameterServerStrategy, the workers will share the generator's state (placed on one of the parameter servers). In this way the workers will still get different random-number streams, as stated above. (This is similar to replicas in a tf.distribute.MirroredStrategy sequentially accessing a generator created outside the strategy.) Each RNG call on a worker will incur a round-trip to a parameter server, which may have performance impacts. When creating a tf.distribute.experimental.ParameterServerStrategy, please make sure that the variable_partitioner argument won't shard small variables of shape [2] or [3] (because generator states must not be sharded). Ways to avoid sharding small variables include setting variable_partitioner to None or to tf.distribute.experimental.partitioners.MinSizePartitioner with a large enough min_shard_bytes (see tf.distribute.experimental.ParameterServerStrategy's documentation for more details).

Args

copy_from: a generator to be copied from.

state: a vector of dtype STATE_TYPE representing the initial state of the RNG, whose length and semantics are algorithm-specific. If it's a variable, the generator will reuse it instead of creating a new variable.

alg: the RNG algorithm. Possible values are tf.random.Algorithm.PHILOX for the Philox algorithm and tf.random.Algorithm.THREEFRY for the ThreeFry algorithm (see paper 'Parallel Random Numbers: As Easy as 1, 2, 3' [https://www.thesalmons.org/john/random123/papers/random123sc11.pdf]). The string names "philox" and "threefry" can also be used. Note PHILOX guarantees the same numbers are produced (given the same random state) across all architectures (CPU, GPU, XLA etc).

Attributes

algorithm: The RNG algorithm id (a Python integer or scalar integer Tensor).

key: The 'key' part of the state of a counter-based RNG.

For a counter-based RNG algorithm such as Philox and ThreeFry (as described in paper 'Parallel Random Numbers: As Easy as 1, 2, 3' [https://www.thesalmons.org/john/random123/papers/random123sc11.pdf]), the RNG state consists of two parts: counter and key. The output is generated via the formula output = hash(key, counter), i.e. a hashing of the counter parametrized by the key. Two RNGs with two different keys can be thought of as generating two independent random-number streams (a stream is formed by increasing the counter).

state: The internal state of the RNG.

Methods

binomial

View source

```python
binomial(
    shape, counts, probs, dtype=tf.dtypes.int32, name=None
)
```

Outputs random values from a binomial distribution.

The generated values follow a binomial distribution with specified count and probability of success parameters.

Example:

```python
counts = [10., 20.]
# Probability of success.
probs = [0.8]
rng = tf.random.Generator.from_seed(seed=234)
binomial_samples = rng.binomial(shape=[2], counts=counts, probs=probs)

counts = ...  # Shape [3, 1, 2]
probs = ...   # Shape [1, 4, 2]
shape = [3, 4, 3, 4, 2]
rng = tf.random.Generator.from_seed(seed=1717)
# Sample shape will be [3, 4, 3, 4, 2]
binomial_samples = rng.binomial(shape=shape, counts=counts, probs=probs)
```
Args

shape: A 1-D integer Tensor or Python array. The shape of the output tensor.

counts: Tensor. The counts of the binomial distribution. Must be broadcastable with probs, and broadcastable with the rightmost dimensions of shape.

probs: Tensor. The probability of success for the binomial distribution. Must be broadcastable with counts and broadcastable with the rightmost dimensions of shape.

dtype: The type of the output. Default: tf.int32.

name: A name for the operation (optional).

Returns

samples: A Tensor of the specified shape filled with random binomial values. For each i, each samples[i, ...] is an independent draw from the binomial distribution on counts[i] trials with probability of success probs[i].

from_key_counter

View source

```python
@classmethod
from_key_counter(
    key, counter, alg
)
```

Creates a generator from a key and a counter.

This constructor only applies if the algorithm is a counter-based algorithm. See method key for the meaning of "key" and "counter".

Args

key: the key for the RNG, a scalar of type STATE_TYPE.

counter: a vector of dtype STATE_TYPE representing the initial counter for the RNG, whose length is algorithm-specific.

alg: the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values.

Returns

The new generator.

from_non_deterministic_state

View source

```python
@classmethod
from_non_deterministic_state(
    alg=None
)
```

Creates a generator by non-deterministically initializing its state.

The source of the non-determinism will be platform- and time-dependent.

Args

alg: (optional) the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values.

Returns

The new generator.

from_seed

View source

```python
@classmethod
from_seed(
    seed, alg=None
)
```

Creates a generator from a seed.

A seed is a 1024-bit unsigned integer represented either as a Python integer or a vector of integers. Seeds shorter than 1024-bit will be padded. The padding, the internal structure of a seed and the way a seed is converted to a state are all opaque (unspecified). The only semantics specification of seeds is that two different seeds are likely to produce two independent generators (but no guarantee).

Args

seed: the seed for the RNG.

alg: (optional) the RNG algorithm. If None, it will be auto-selected. See __init__ for its possible values.

Returns

The new generator.
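Since the numbers are fully determined by the seed, two generators built from the same seed yield identical streams on the same device:

```python
import tensorflow as tf

# Same seed, same (auto-selected) algorithm: identical streams.
a = tf.random.Generator.from_seed(1234).normal([3])
b = tf.random.Generator.from_seed(1234).normal([3])
# a and b are elementwise equal
```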

from_state

View source

```python
@classmethod
from_state(
    state, alg
)
```

Creates a generator from a state.

See __init__ for description of state and alg.

Args

state: the new state.

alg: the RNG algorithm.

Returns

The new generator.

make_seeds

View source

```python
make_seeds(
    count=1
)
```

Generates seeds for stateless random ops.

For example:

```python
seeds = get_global_generator().make_seeds(count=10)
for i in range(10):
    seed = seeds[:, i]
    numbers = stateless_random_normal(shape=[2, 3], seed=seed)
    ...
```
Args

count: the number of seed pairs (note that stateless random ops need a pair of seeds to invoke).

Returns

A tensor of shape [2, count] and dtype int64.
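A fully runnable variant of the loop above, using tf.random.stateless_normal as the stateless sampler:

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(42)
seeds = g.make_seeds(count=10)   # shape [2, 10], dtype int64
for i in range(10):
    seed = seeds[:, i]           # one [2]-shaped seed pair
    numbers = tf.random.stateless_normal(shape=[2, 3], seed=seed)
```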

normal

View source

```python
normal(
    shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None
)
```

Outputs random values from a normal distribution.

Args

shape: A 1-D integer Tensor or Python array. The shape of the output tensor.

mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.

stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.

dtype: The type of the output.

name: A name for the operation (optional).

Returns

A tensor of the specified shape filled with random normal values.

reset

View source

```python
reset(
    state
)
```

Resets the generator by a new state.

See __init__ for the meaning of "state".

Args

state: the new state.

reset_from_key_counter

View source

```python
reset_from_key_counter(
    key, counter
)
```

Resets the generator by a new key-counter pair.

See from_key_counter for the meaning of "key" and "counter".

Args

key: the new key.

counter: the new counter.

reset_from_seed

View source

```python
reset_from_seed(
    seed
)
```

Resets the generator by a new seed.

See from_seed for the meaning of "seed".

Args

seed: the new seed.

skip

View source

```python
skip(
    delta
)
```

Advance the counter of a counter-based RNG.

Args

delta: the amount of advancement. The state of the RNG after skip(n) will be the same as that after normal([n]) (or any other distribution). The actual increment added to the counter is an unspecified implementation detail.

Returns

A Tensor of type int64.
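Given the equivalence stated above, skip(n) can fast-forward one generator to match another that has already drawn n numbers:

```python
import tensorflow as tf

g1 = tf.random.Generator.from_seed(42)
g2 = tf.random.Generator.from_seed(42)

g1.normal([4])   # consume four numbers
g2.skip(4)       # advance the counter as if four numbers were drawn

# Both generators now continue from the same point in the stream.
a = g1.normal([2])
b = g2.normal([2])
```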

split

View source

```python
split(
    count=1
)
```

Returns a list of independent Generator objects.

Two generators are independent of each other in the sense that the random-number streams they generate don't have statistically detectable correlations. The new generators are also independent of the old one. The old generator's state will be changed (like other random-number generating methods), so two calls of split will return different new generators.

For example:

```python
gens = get_global_generator().split(count=10)
for gen in gens:
    numbers = gen.normal(shape=[2, 3])
    # ...
gens2 = get_global_generator().split(count=10)
# gens2 will be different from gens
```

The new generators will be put on the current device (possibly different from the old generator's), for example:

```python
with tf.device("/device:CPU:0"):
    gen = tf.random.Generator.from_seed(1234)  # gen is on CPU
with tf.device("/device:GPU:0"):
    gens = gen.split(count=10)  # gens are on GPU
```
Args

count: the number of generators to return.

Returns

A list (length count) of Generator objects independent of each other. The new generators have the same RNG algorithm as the old one.

truncated_normal

View source

```python
truncated_normal(
    shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None
)
```

Outputs random values from a truncated normal distribution.

The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
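For example, every sample from the standard truncated normal (mean 0, stddev 1) has magnitude at most 2:

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(5)
x = g.truncated_normal(shape=[1000])
# values beyond 2 standard deviations were dropped and re-picked,
# so all magnitudes are at most 2
```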

Args

shape: A 1-D integer Tensor or Python array. The shape of the output tensor.

mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.

stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation.

dtype: The type of the output.

name: A name for the operation (optional).

Returns

A tensor of the specified shape filled with random truncated normal values.

uniform

View source

```python
uniform(
    shape, minval=0, maxval=None, dtype=tf.dtypes.float32, name=None
)
```

Outputs random values from a uniform distribution.

The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded. (For float numbers, especially low-precision types like bfloat16, the result may sometimes include maxval because of rounding.)

For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly.

In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64).

For full-range random integers, pass minval=None and maxval=None with an integer dtype (for integer dtypes, minval and maxval must be both None or both not None).
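For example, bounded and full-range integer sampling:

```python
import tensorflow as tf

g = tf.random.Generator.from_seed(7)

# Bounded integers in [0, 10); maxval is required for integer dtypes.
ints = g.uniform(shape=[5], minval=0, maxval=10, dtype=tf.int32)

# Full-range int32 values: both bounds must be None.
full = g.uniform(shape=[5], minval=None, maxval=None, dtype=tf.int32)
```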

Args

shape: A 1-D integer Tensor or Python array. The shape of the output tensor.

minval: A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The lower bound (included) on the range of random values to generate. Pass None for full-range integers. Defaults to 0.

maxval: A Tensor or Python value of type dtype, broadcastable with shape (for integer types, broadcasting is not supported, so it needs to be a scalar). The upper bound (excluded) on the range of random values to generate. Pass None for full-range integers. Defaults to 1 if dtype is floating point.

dtype: The type of the output.

name: A name for the operation (optional).

Returns

A tensor of the specified shape filled with random uniform values.

Raises

ValueError: If dtype is integral and maxval is not specified.

uniform_full_int

View source

```python
uniform_full_int(
    shape, dtype=tf.dtypes.uint64, name=None
)
```

Uniform distribution on an integer type's entire range.

This method is the same as setting minval and maxval to None in the uniform method.

Args

shape: the shape of the output.

dtype: (optional) the integer type, default to uint64.

name: (optional) the name of the node.

Returns

A tensor of random numbers of the required shape.
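A quick illustration of the stated equivalence: two generators with the same seed, one using uniform_full_int and the other passing None bounds to uniform, draw the same values.

```python
import tensorflow as tf

g1 = tf.random.Generator.from_seed(3)
g2 = tf.random.Generator.from_seed(3)

a = g1.uniform_full_int(shape=[4])                                 # default dtype uint64
b = g2.uniform(shape=[4], minval=None, maxval=None, dtype=tf.uint64)
# a and b are identical draws from the full uint64 range
```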