Preface

Hello everyone, I'm Aguang.

This column collects "100 Practical PyTorch Deep Learning Projects", a series of deep learning projects of different kinds; each entry covers the underlying principles and source code, and every project example ships with complete code plus a dataset.

Still being updated ~ ✨
🚨 My project environment:

- Platform: Windows 10
- Language: Python 3.7
- IDE: PyCharm
- PyTorch version: 1.8.1

💥 Project column: [100 Practical PyTorch Deep Learning Projects]
1. Time-Series Forecasting of Weather Changes with a GRU
Atmospheric motion is extremely complex and many factors influence the weather, while our ability to understand the motion of the atmosphere itself is very limited, so the accuracy of weather forecasting remains low. In practice, every forecast a meteorologist makes involves a complicated process of comprehensive analysis across the various meteorological variables, such as temperature and precipitation. Moreover, extreme weather events that used to be rare are becoming more and more common, which greatly increases the difficulty of forecasting.

This project trains a GRU, a type of recurrent neural network, to predict how a city's temperature changes given a set of weather factors.
2. The Dataset
We use a weather time-series dataset recorded by the weather station of the Max Planck Institute for Biogeochemistry in Jena, Germany. In this dataset, 14 different quantities (such as air temperature, atmospheric pressure, humidity, wind direction, etc.) are recorded every 10 minutes, spanning the years 2009-2016.
Dataset download link
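Before defining the model, the raw records have to be turned into supervised samples. The snippet below is a minimal sketch of that preprocessing, assuming the publicly distributed CSV of this dataset; the file and column names are assumptions, not taken from the original post.

import numpy as np
import pandas as pd

# Assumed file/column names from the public Jena climate dataset;
# adjust them to match your copy.
df = pd.read_csv("jena_climate_2009_2016.csv")
features = df.drop(columns=["Date Time"]).values.astype("float32")  # the 14 quantities
target = df["T (degC)"].values.astype("float32")                    # air temperature

# Standardize the features, then cut the series into sliding windows:
# each sample is seq_len consecutive steps, labeled with the temperature
# one step after the window.
mean, std = features.mean(axis=0), features.std(axis=0)
features = (features - mean) / std

seq_len = 24  # e.g. 4 hours of history at the 10-minute resolution
X = np.stack([features[i:i + seq_len] for i in range(len(features) - seq_len)])
y = target[seq_len:]
print(X.shape, y.shape)  # (N, seq_len, 14), (N,)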
3. Defining the Network
GRU (Gated Recurrent Unit) is a type of recurrent neural network (RNN). Like the LSTM (Long Short-Term Memory), it was proposed to address problems such as long-term dependencies and vanishing gradients during backpropagation.

- Update gate: controls how much of the previous memory is carried over to the current time step. If we set the reset gate to 1 and the update gate to 0, we again recover the standard RNN model (see the equations below).
- Reset gate: essentially determines how much of the past information should be forgotten.
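For reference, the two gates and the hidden-state update compute the following (a common simplified form; PyTorch's nn.GRU implements the same structure, with separate input- and hidden-side bias terms). Here $\sigma$ is the sigmoid and $\odot$ denotes element-wise multiplication:

$$
\begin{aligned}
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\
\tilde{h}_t &= \tanh\big(W_h x_t + r_t \odot (U_h h_{t-1}) + b_h\big) \\
h_t &= (1 - z_t) \odot \tilde{h}_t + z_t \odot h_{t-1}
\end{aligned}
$$

Setting $r_t = 1$ and $z_t = 0$ collapses this to $h_t = \tanh(W_h x_t + U_h h_{t-1} + b_h)$, exactly the vanilla RNN update mentioned in the first bullet above.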
import torch
import torch.nn as nn

# 7. Define the GRU network
class GRU(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(GRU, self).__init__()
        self.hidden_dim = hidden_dim  # hidden-state size
        self.num_layers = num_layers  # number of stacked GRU layers
        # input_dim is the feature dimension, i.e. the number of features
        # recorded at each time step -- 14 here
        self.gru = nn.GRU(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    # forward pass (not shown in the original excerpt; restored here with
    # the standard last-time-step readout)
    def forward(self, x):
        out, _ = self.gru(x)           # out: (batch, seq_len, hidden_dim)
        return self.fc(out[:, -1, :])  # predict from the final time step
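A quick sanity check of the shapes (the hyperparameter values here are illustrative, not from the original post):

model = GRU(input_dim=14, hidden_dim=32, num_layers=2, output_dim=1)
dummy = torch.randn(8, 24, 14)  # (batch, seq_len, features)
print(model(dummy).shape)       # torch.Size([8, 1])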
4. Training the Network
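The training loop below references epochs, train_loader, test_loader, best_loss, and save_path, which come from earlier steps of the full source and are not shown in this excerpt. A minimal sketch of that setup, assuming the X and y window arrays built in the dataset sketch above (all values illustrative):

import torch
from torch.utils.data import DataLoader, TensorDataset
from tqdm import tqdm

input_dim, hidden_dim, num_layers, output_dim = 14, 32, 2, 1
epochs = 10
best_loss = float('inf')
save_path = 'best_gru.pth'

# Chronological 80/20 split, then wrap the windows in DataLoaders.
split = int(len(X) * 0.8)
train_ds = TensorDataset(torch.from_numpy(X[:split]), torch.from_numpy(y[:split]))
test_ds  = TensorDataset(torch.from_numpy(X[split:]), torch.from_numpy(y[split:]))
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
test_loader  = DataLoader(test_ds,  batch_size=64)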
model = GRU(input_dim, hidden_dim, num_layers, output_dim)  # instantiate the GRU network
loss_function = nn.MSELoss()  # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # optimizer

# 8. Model training
for epoch in range(epochs):
    model.train()
    running_loss = 0
    train_bar = tqdm(train_loader)  # progress bar
    for data in train_bar:
        x_train, y_train = data  # unpack X and y from the loader
        optimizer.zero_grad()
        y_train_pred = model(x_train)
        loss = loss_function(y_train_pred, y_train.reshape(-1, 1))
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1,
                                                                 epochs,
                                                                 loss)

    # Model validation
    model.eval()
    test_loss = 0
    with torch.no_grad():
        test_bar = tqdm(test_loader)
        for data in test_bar:
            x_test, y_test = data
            y_test_pred = model(x_test)
            # accumulate the per-batch losses and average them afterwards,
            # so the checkpoint decision reflects the whole test set
            test_loss += loss_function(y_test_pred, y_test.reshape(-1, 1)).item()
    test_loss /= len(test_loader)

    # keep the checkpoint with the lowest validation loss
    if test_loss < best_loss:
        best_loss = test_loss
        torch.save(model.state_dict(), save_path)

print('Finished Training')
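Once training finishes, the best checkpoint can be reloaded for prediction. A short sketch, reusing the names assumed above:

model.load_state_dict(torch.load(save_path))
model.eval()
with torch.no_grad():
    sample = torch.from_numpy(X[split:split + 1])  # one held-out window
    print(model(sample).item())  # predicted temperature for the next step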
Full source code
[100 Practical PyTorch Deep Learning Projects] — Time-Series Forecasting of Weather Changes with a GRU | Example 21 (CSDN blog)