Training While Loop in TensorFlow

I've attempted converting a Python-side training loop to TensorFlow to (hypothetically) make the code run faster, by not constantly handing control back to the CPU. However, I can't manage to use tf.while_loop.

Here's the code that works:

import numpy as np
import tensorflow as tf

from tqdm import tqdm
from sklearn.datasets import load_iris
from sklearn.preprocessing import RobustScaler

x, y = load_iris(return_X_y=True)
x = RobustScaler().fit_transform(x)

shape = (10, 10)
max_epochs = 1000

graph = tf.Graph()
sess = tf.Session(graph=graph)

x = x.astype(np.float64)

# Construct graph
with graph.as_default():
    weights = tf.get_variable(
        'weights', shape, initializer=tf.constant_initializer, dtype=tf.float64
    )
    curr_epoch = tf.placeholder(dtype=tf.int64, shape=())

    with tf.name_scope('data'):
        data = tf.data.Dataset.from_tensor_slices(x)
        data = data.shuffle(buffer_size=10000)
        data = data.repeat(max_epochs)
        data = data.batch(1)
        data = data.make_one_shot_iterator().get_next()

    with tf.name_scope('update'):
        update_op = make_update_op(weights)

    init = tf.global_variables_initializer()

sess.run(init)

for i in tqdm(range(max_epochs)):
    for _ in range(x.shape[0]):
        sess.run(update_op, feed_dict={curr_epoch: i})

np_weights = sess.run(weights)
print(np_weights)  # Correctly prints an array of 150's.
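
As a sanity check on that output: each of the 1000 epochs runs 150 per-sample updates of +0.001, so every weight should end at 1000 × 150 × 0.001 = 150. A quick NumPy-only check of the arithmetic (no TensorFlow involved):

```python
import numpy as np

max_epochs = 1000
n_samples = 150  # the iris dataset has 150 rows
shape = (10, 10)

# Replicate the update schedule in plain NumPy: one +0.001 step
# per sample per epoch, starting from zero-initialized weights.
w = np.zeros(shape)
for _ in range(max_epochs * n_samples):
    w += 0.001

print(np.allclose(w, 150.0))  # -> True: every entry accumulates to 150
```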

Now, if I create an update function to pass to tf.while_loop, an error is thrown.

def make_update_op(w):
    return w.assign(w + 0.001)

# In the code above:
update_op = tf.while_loop(lambda _: True, make_update_op, (weights,), maximum_iterations=x.shape[0])

# No inner loop:
for i in tqdm(range(max_epochs)):
    sess.run(update_op, feed_dict={curr_epoch: i})

Line 22, in make_update_op
    return w.assign(
AttributeError: 'Tensor' object has no attribute 'assign'

I don't quite understand what is happening, even after reading the documentation. weights is a Variable, after all. What could be done to make the training loop work correctly?



Turns out, all that was missing was the fact that one cannot assign to a variable inside the loop, as Vlad pointed out: tf.while_loop passes its loop variables into the body as plain tensors, so there is no Variable to assign to there. Instead, one can return the new value from the body and assign it to the variable once, outside the loop.

def make_update_op(w):
    return w + 0.001

new_w = tf.while_loop(lambda _: True, make_update_op, (weights,), maximum_iterations=x.shape[0])
update_op = weights.assign(new_w)
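
The contract tf.while_loop enforces is a functional one: the body receives the current loop values and must return the next ones, rather than mutating anything in place. A plain-Python analog of that contract (a simplified sketch, not the real TensorFlow implementation):

```python
def while_loop(cond, body, loop_vars, maximum_iterations):
    # Simplified analog of tf.while_loop's contract: the body never
    # mutates state in place; it returns the next loop values instead.
    vals = tuple(loop_vars)
    for _ in range(maximum_iterations):
        if not cond(*vals):
            break
        vals = body(*vals)
        if not isinstance(vals, tuple):
            vals = (vals,)
    return vals[0] if len(vals) == 1 else vals

def make_update_op(w):
    return w + 0.001  # return the new value instead of assigning

new_w = while_loop(lambda _: True, make_update_op, (0.0,), maximum_iterations=150)
print(round(new_w, 6))  # 150 updates of 0.001 -> 0.15
```

The `weights.assign(new_w)` on the outside plays the role of the single final assignment: the loop itself only threads values through.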

To use more variables, one would need to return the same number of values from the function and unpack them in Python, but the principle is the same.

def make_update_op(w, d):
    return w + 0.001, d

new_w, _ = tf.while_loop(lambda *_: True, make_update_op, (weights, data), maximum_iterations=x.shape[0])
update_op = weights.assign(new_w)