
How to Write TensorFlow Code in Python

Getting Started

TensorFlow is not a pure neural-network framework, but a framework for numerical computation using dataflow graphs.

TensorFlow represents a computation task as a directed graph. The graph's nodes, called ops (operations), represent processing applied to data; the graph's edges describe how data flows between them.

The framework's computation is the processing of a flow of tensors, which is where the name TensorFlow comes from.

TensorFlow represents data as tensors. A tensor is a multi-dimensional array, represented in Python by numpy.ndarray.
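For instance, tensor rank lines up directly with numpy.ndarray.ndim. The following is a plain NumPy illustration, independent of TensorFlow:

```python
import numpy as np

# Tensors of rank 0, 1 and 2 are plain NumPy arrays in Python
scalar = np.array(3.0)                     # rank-0 tensor (a scalar)
vector = np.array([1.0, 2.0, 3.0])         # rank-1 tensor (a vector)
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])            # rank-2 tensor (a matrix)

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
print(matrix.shape)                           # (2, 2)
```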

TensorFlow executes graphs with a Session and maintains state with Variables. tf.constant is an op that can only produce output, and is commonly used as a data source.

Let's build a simple graph with just two constants as inputs, followed by a matrix multiplication:

from tensorflow import Session, device, constant, matmul

'''A simple graph: two constants as inputs, followed by a matrix multiplication'''
# Without a with Session() block, session.close() must be called manually.
# with device specifies the device on which the computation runs:
#   "/cpu:0": the machine's CPU.
#   "/gpu:0": the machine's first GPU, if it has one.
#   "/gpu:1": the machine's second GPU, and so on.
with Session() as session:  # context in which the graph is executed
    with device('/cpu:0'):  # designate the computation device
        mat1 = constant([[3, 3]])  # create the source nodes
        mat2 = constant([[2], [2]])
        product = matmul(mat1, mat2)  # link to the upstream nodes, building the graph
        result = session.run(product)  # run the computation
        print(result)
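The graph computes an ordinary matrix product, so what session.run(product) returns can be checked with an equivalent plain NumPy computation (not TensorFlow):

```python
import numpy as np

mat1 = np.array([[3, 3]])       # shape (1, 2)
mat2 = np.array([[2], [2]])     # shape (2, 1)
result = np.matmul(mat1, mat2)  # (1, 2) x (2, 1) -> (1, 1)
print(result)  # [[12]]
```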

Next, build a counter using a Variable:

from tensorflow import Session, constant, Variable, add, assign, initialize_all_variables

state = Variable(0, name='counter')  # create the counter
one = constant(1)                    # create a data source: 1
val = add(state, one)                # node holding the new value
update = assign(state, val)          # update the counter
setup = initialize_all_variables()   # initialize the Variables

with Session() as session:
    session.run(setup)               # run the initialization
    print(session.run(state))        # print the initial value
    for i in range(3):
        session.run(update)          # run the update
        print(session.run(state))    # print the counter's value

Before a variable can be used, the graph returned by initialize_all_variables() must be run; running a Variable node returns the variable's value.
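The run-time behaviour of this counter graph can be sketched in plain Python (an illustration of the semantics only, no TensorFlow involved): each run of the update op reads the variable, adds one, and writes the result back.

```python
state = 0                      # Variable(0, name='counter') after initialization
one = 1                        # constant(1)

values = [state]               # the initial value is printed first
for i in range(3):
    state = state + one        # add(state, one) followed by assign(state, val)
    values.append(state)

print(values)  # [0, 1, 2, 3]
```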

In this example, graph construction is written outside the session context, and no device is specified.

In the example above, session.run takes a single op as its argument, but run can in fact accept a list of ops as input:

session.run([op1, op2])

The examples so far have used constant as the data source; feed allows data to be supplied dynamically at run time:

from tensorflow import Session, placeholder, mul, float32

input1 = placeholder(float32)
input2 = placeholder(float32)
output = mul(input1, input2)

with Session() as session:
    print(session.run(output, feed_dict={input1: [3], input2: [2]}))
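A placeholder is just a slot filled at run time; the computation itself is an elementwise product. This NumPy check (a stand-in for the TensorFlow run, with run_output a hypothetical helper) shows the value the session produces:

```python
import numpy as np

def run_output(input1, input2):
    # stands in for session.run(output, feed_dict={...}): mul is elementwise
    return np.multiply(input1, input2)

result = run_output(np.array([3.0]), np.array([2.0]))
print(result)  # [6.]
```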

Implementing a Simple Neural Network

Neural networks are widely used machine learning models. For the underlying theory, see this post, or try the online demo at tensorflow playground.

First, define a BPNeuralNetwork class:

class BPNeuralNetwork:
    def __init__(self):
        self.session = tf.Session()
        self.input_layer = None
        self.label_layer = None
        self.loss = None
        self.trainer = None
        self.layers = []

    def __del__(self):
        self.session.close()

Write a function that builds a single network layer. Each layer of neurons is represented by a dataflow graph: a Variable matrix holds the connection weights to the previous layer, another Variable vector holds the bias values, and an activation function is set for the layer.

def make_layer(inputs, in_size, out_size, activate=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    basis = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    result = tf.matmul(inputs, weights) + basis
    if activate is None:
        return result
    else:
        return activate(result)
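What make_layer builds is one dense layer: output = activate(inputs · weights + bias). The forward computation can be sketched in plain NumPy (a hypothetical stand-in for illustration, not the TensorFlow graph itself):

```python
import numpy as np

def dense_forward(inputs, weights, basis, activate=None):
    # same arithmetic as tf.matmul(inputs, weights) + basis
    result = np.matmul(inputs, weights) + basis
    return result if activate is None else activate(result)

relu = lambda x: np.maximum(x, 0)  # counterpart of tf.nn.relu

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2))    # 4 samples, in_size = 2
w = rng.standard_normal((2, 10))   # [in_size, out_size]
b = np.zeros((1, 10)) + 0.1        # bias row, broadcast across samples
out = dense_forward(x, w, b, activate=relu)
print(out.shape)  # (4, 10)
```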

Use a placeholder as the input layer.

self.input_layer = tf.placeholder(tf.float32, [None, 2])

The second argument to placeholder is the tensor's shape: [None, 2] denotes a 2-D array with an unlimited number of rows and 2 columns, with the same meaning as numpy.ndarray.shape. Here, self.input_layer is defined as an input layer that accepts two-dimensional input.
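The shape convention matches NumPy directly: a batch of N two-feature samples is an array of shape (N, 2), as this small check shows.

```python
import numpy as np

batch = np.array([[0, 0], [0, 1], [1, 0]])  # 3 samples, 2 features each
print(batch.shape)  # (3, 2): any number of rows, exactly 2 columns
```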

Likewise, use a placeholder for the training-data labels:

self.label_layer = tf.placeholder(tf.float32, [None, 1])

Use make_layer to define two hidden layers for the network, and use the last layer as the output layer:

self.layers.append(make_layer(self.input_layer, 2, 10, activate=tf.nn.relu))
self.layers.append(make_layer(self.layers[0], 10, 2, activate=None))

Use the mean squared error as the loss function:

self.loss = tf.reduce_mean(tf.reduce_sum(tf.square((self.label_layer - self.layers[1])), reduction_indices=[1]))
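The loss is the mean over samples of each sample's summed squared error. In NumPy terms (an equivalent computation on made-up predictions, for illustration only):

```python
import numpy as np

labels = np.array([[0.0], [1.0], [1.0], [0.0]])  # target values
preds  = np.array([[0.1], [0.9], [0.8], [0.2]])  # hypothetical network outputs

# tf.reduce_sum(tf.square(...), reduction_indices=[1]): per-sample squared error
per_sample = np.sum(np.square(labels - preds), axis=1)
# tf.reduce_mean(...): average over the batch -> a scalar loss
loss = np.mean(per_sample)
print(loss)
```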

tf.train provides several optimizers that can be used to train the network. Take minimizing the loss function as the objective:

self.trainer = tf.train.GradientDescentOptimizer(learn_rate).minimize(self.loss)
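GradientDescentOptimizer.minimize repeatedly nudges each variable against the gradient of the loss: v ← v − learn_rate · ∂loss/∂v. A minimal hand-rolled sketch of this update rule on a one-dimensional quadratic (illustration only, not the TensorFlow optimizer):

```python
learn_rate = 0.05
v = 5.0                            # a single trainable variable
loss = lambda v: (v - 2.0) ** 2    # minimum at v = 2.0
grad = lambda v: 2.0 * (v - 2.0)   # analytic gradient of the loss

for step in range(200):
    v = v - learn_rate * grad(v)   # one gradient-descent step

print(round(v, 4))  # converges toward the minimum at 2.0
```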

Run the neural network model with a Session:

initer = tf.initialize_all_variables()
# do training
self.session.run(initer)
for i in range(limit):
    self.session.run(self.trainer, feed_dict={self.input_layer: cases, self.label_layer: labels})

Use the trained model to make predictions:

self.session.run(self.layers[-1], feed_dict={self.input_layer: case})

Complete code:

import tensorflow as tf
import numpy as np


def make_layer(inputs, in_size, out_size, activate=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    basis = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    result = tf.matmul(inputs, weights) + basis
    if activate is None:
        return result
    else:
        return activate(result)


class BPNeuralNetwork:
    def __init__(self):
        self.session = tf.Session()
        self.input_layer = None
        self.label_layer = None
        self.loss = None
        self.optimizer = None
        self.layers = []

    def __del__(self):
        self.session.close()

    def train(self, cases, labels, limit=100, learn_rate=0.05):
        # build the network
        self.input_layer = tf.placeholder(tf.float32, [None, 2])
        self.label_layer = tf.placeholder(tf.float32, [None, 1])
        self.layers.append(make_layer(self.input_layer, 2, 10, activate=tf.nn.relu))
        self.layers.append(make_layer(self.layers[0], 10, 2, activate=None))
        self.loss = tf.reduce_mean(tf.reduce_sum(tf.square((self.label_layer - self.layers[1])), reduction_indices=[1]))
        self.optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(self.loss)
        initer = tf.initialize_all_variables()
        # do training
        self.session.run(initer)
        for i in range(limit):
            self.session.run(self.optimizer, feed_dict={self.input_layer: cases, self.label_layer: labels})

    def predict(self, case):
        return self.session.run(self.layers[-1], feed_dict={self.input_layer: case})

    def test(self):
        x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y_data = np.array([[0, 1, 1, 0]]).transpose()
        test_data = np.array([[0, 1]])
        self.train(x_data, y_data)
        print(self.predict(test_data))


nn = BPNeuralNetwork()
nn.test()

The model above, while simple, is inflexible. Following the same idea, the author implemented a network with configurable input and output dimensions and any number of hidden layers; see dynamic_bpnn.py:

import tensorflow as tf
import numpy as np


def make_layer(inputs, in_size, out_size, activate=None):
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    basis = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    result = tf.matmul(inputs, weights) + basis
    if activate is None:
        return result
    else:
        return activate(result)


class BPNeuralNetwork:
    def __init__(self):
        self.session = tf.Session()
        self.loss = None
        self.optimizer = None
        self.input_n = 0
        self.hidden_n = 0
        self.hidden_size = []
        self.output_n = 0
        self.input_layer = None
        self.hidden_layers = []
        self.output_layer = None
        self.label_layer = None

    def __del__(self):
        self.session.close()

    def setup(self, ni, nh, no):
        # set the layer sizes
        self.input_n = ni
        self.hidden_n = len(nh)   # number of hidden layers
        self.hidden_size = nh     # number of units in each hidden layer
        self.output_n = no
        # build the input layer
        self.input_layer = tf.placeholder(tf.float32, [None, self.input_n])
        # build the label layer
        self.label_layer = tf.placeholder(tf.float32, [None, self.output_n])
        # build the hidden layers
        in_size = self.input_n
        out_size = self.hidden_size[0]
        inputs = self.input_layer
        self.hidden_layers.append(make_layer(inputs, in_size, out_size, activate=tf.nn.relu))
        for i in range(self.hidden_n - 1):
            in_size = out_size
            out_size = self.hidden_size[i + 1]
            inputs = self.hidden_layers[-1]
            self.hidden_layers.append(make_layer(inputs, in_size, out_size, activate=tf.nn.relu))
        # build the output layer
        self.output_layer = make_layer(self.hidden_layers[-1], self.hidden_size[-1], self.output_n)

    def train(self, cases, labels, limit=100, learn_rate=0.05):
        self.loss = tf.reduce_mean(tf.reduce_sum(tf.square((self.label_layer - self.output_layer)), reduction_indices=[1]))
        self.optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(self.loss)
        initer = tf.initialize_all_variables()
        # do training
        self.session.run(initer)
        for i in range(limit):
            self.session.run(self.optimizer, feed_dict={self.input_layer: cases, self.label_layer: labels})

    def predict(self, case):
        return self.session.run(self.output_layer, feed_dict={self.input_layer: case})

    def test(self):
        x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y_data = np.array([[0, 1, 1, 0]]).transpose()
        test_data = np.array([[0, 1]])
        self.setup(2, [10, 5], 1)
        self.train(x_data, y_data)
        print(self.predict(test_data))


nn = BPNeuralNetwork()
nn.test()
