# If you want your program to run on different devices,
# tf.device provides a convenient way to do so:
# all operations created within its context are placed on the same device.
#
# A device specification has the following form:
# /job:<JOB_NAME>/task:<TASK_INDEX>/device:<DEVICE_TYPE>:<DEVICE_INDEX>
# <JOB_NAME>     is an alphanumeric string that does not start with a number
# <DEVICE_TYPE>  is a device type, such as GPU or CPU
# <TASK_INDEX>   is a non-negative integer identifying a task within a job
# <DEVICE_INDEX> is a non-negative integer identifying a device
#
# Operations created outside either context will run on the "best possible" device.
# For example, if you have a GPU and a CPU available, and the operation
# has a GPU implementation, TensorFlow will choose the GPU.
import tensorflow as tf

weights = tf.random_normal([2, 2, 3])

with tf.device("/device:CPU:0"):
    # Operations created in this context will be pinned to the CPU.
    img = tf.cast(tf.image.decode_jpeg(tf.read_file("img.jpg")), dtype=tf.float32)

with tf.device("/device:GPU:0"):
    # Operations created in this context will be pinned to the GPU.
    result = tf.matmul(weights, img)
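
# A minimal sketch (not part of the original snippet) of how the graph above
# could be executed and the placement verified. It assumes an "img.jpg" file
# exists with a shape compatible with `weights` for the matmul. With
# log_device_placement=True, TensorFlow logs the device each op was assigned
# to; allow_soft_placement=True lets it fall back to the CPU if no GPU exists.
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
with tf.Session(config=config) as sess:
    # Running the graph prints one placement line per executed operation.
    print(sess.run(result))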