TensorFlow learning handbook
TensorFlow learning handbook 1: https://cloud.tencent.com/developer/section/1475687
TensorFlow learning handbook 2: https://data-flair.training/blogs/tensorflow-wide-and-deep-learning/
Detailed op data operations
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/convolution_layer.html
Intel-optimized TensorFlow, including VNNI support
https://www.intel.ai/framework-optimizations
https://software.intel.com/en-us/frameworks/tensorflow
https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide
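The Intel guides above recommend installing the MKL-DNN build and tuning threading. A minimal sketch, assuming the pip package and thread counts from the installation guide; the exact values are machine-dependent placeholders:

```python
# Install the Intel-optimized build first (per the installation guide above):
#   pip install intel-tensorflow
import os

# Threading knobs commonly recommended for MKL-DNN builds; values are
# placeholders, set them before importing TensorFlow.
os.environ["OMP_NUM_THREADS"] = "8"   # assumed: physical cores per socket
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"

import tensorflow as tf

tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)
```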
TensorFlow WDL (Wide & Deep Learning) training
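The log below appears to come from the official wide & deep example (wide_deep_run_loop.py). For reference, a minimal sketch of training a comparable tf.estimator.DNNLinearCombinedClassifier; the feature columns and input pipeline are made-up placeholders, not the official script's census columns:

```python
import tensorflow as tf

# Placeholder feature columns (the real census columns live in the official example).
age = tf.feature_column.numeric_column("age")
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    "occupation", hash_bucket_size=1000)

wide_columns = [occupation]                                    # linear (wide) part
deep_columns = [age,
                tf.feature_column.embedding_column(occupation, dimension=8)]

estimator = tf.estimator.DNNLinearCombinedClassifier(
    model_dir="/tmp/wdl_model",
    linear_feature_columns=wide_columns,
    dnn_feature_columns=deep_columns,
    dnn_hidden_units=[100, 50])

def input_fn():
    # Toy in-memory data; replace with the census CSV pipeline.
    features = {"age": [25.0, 40.0, 60.0, 33.0],
                "occupation": ["tech", "sales", "tech", "admin"]}
    labels = [0, 1, 0, 1]
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    return ds.shuffle(4).repeat().batch(2)

estimator.train(input_fn=input_fn, steps=100)   # emits loss/step lines like those below
print(estimator.evaluate(input_fn=input_fn, steps=10))
```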
# training
I0414 15:15:04.045984 140353775761216 basic_session_run_hooks.py:262] average_loss = 5.8287296, loss = 233.14919
I0414 15:15:04.046458 140353775761216 basic_session_run_hooks.py:262] loss = 233.14919, step = 1
I0414 15:15:04.709909 140353775761216 basic_session_run_hooks.py:692] global_step/sec: 150.54
I0414 15:15:04.710541 140353775761216 basic_session_run_hooks.py:260] average_loss = 0.49428934, loss = 19.771574 (0.665 sec)
I0414 15:15:04.710799 140353775761216 basic_session_run_hooks.py:260] loss = 19.771574, step = 101 (0.664 sec)
I0414 15:15:04.985524 140353775761216 basic_session_run_hooks.py:692] global_step/sec: 362.76
...
I0414 15:19:24.245445 140353775761216 basic_session_run_hooks.py:692] global_step/sec: 369.177
I0414 15:19:24.245903 140353775761216 basic_session_run_hooks.py:260] average_loss = 0.26652536, loss = 10.661015 (0.271 sec)
I0414 15:19:24.246119 140353775761216 basic_session_run_hooks.py:260] loss = 10.661015, step = 32552 (0.271 sec)
step: one step is one training iteration on a single mini-batch
loss: total loss summed over the batch_size samples in the current mini-batch
average_loss: per-sample loss, i.e. loss / batch_size
global_step/sec: training throughput in steps per second, e.g. 100 / (runtime from step 1 to step 101)
epoch: one full pass over all training samples
(see the worked check right after this list)
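A quick sanity check of these relationships against the log above; the batch size of 40 is inferred from the numbers, not stated in the log:

```python
# From the step-1 line: loss = 233.14919, average_loss = 5.8287296
loss, average_loss = 233.14919, 5.8287296

batch_size = loss / average_loss   # ≈ 40, so each mini-batch holds ~40 samples
print(round(batch_size))           # 40

# global_step/sec: 100 steps between the step-1 and step-101 lines took ~0.664 s
print(100 / 0.664)                 # ≈ 150.6, matching "global_step/sec: 150.54"
```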
# evaluation
I0414 15:19:28.353387 140353775761216 wide_deep_run_loop.py:118] Results at epoch 40 / 40
I0414 15:19:28.353549 140353775761216 wide_deep_run_loop.py:122] accuracy: 0.8573798
I0414 15:19:28.353610 140353775761216 wide_deep_run_loop.py:122] accuracy_baseline: 0.76377374
I0414 15:19:28.353667 140353775761216 wide_deep_run_loop.py:122] auc: 0.90705955
I0414 15:19:28.353724 140353775761216 wide_deep_run_loop.py:122] auc_precision_recall: 0.7749401
I0414 15:19:28.353781 140353775761216 wide_deep_run_loop.py:122] average_loss: 0.316328
I0414 15:19:28.353847 140353775761216 wide_deep_run_loop.py:122] global_step: 32580
I0414 15:19:28.353904 140353775761216 wide_deep_run_loop.py:122] label/mean: 0.23622628
I0414 15:19:28.353960 140353775761216 wide_deep_run_loop.py:122] loss: 12.622883
I0414 15:19:28.354016 140353775761216 wide_deep_run_loop.py:122] precision: 0.7577808
I0414 15:19:28.354072 140353775761216 wide_deep_run_loop.py:122] prediction/mean: 0.2373067
I0414 15:19:28.354127 140353775761216 wide_deep_run_loop.py:122] recall: 0.58242327
Distinguishing the concepts of accuracy, AUC, loss, precision, and recall:
https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/
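A small sketch of how these evaluation metrics could be reproduced with tf.keras.metrics; the labels and predicted probabilities here are made-up toy values, not the census data above:

```python
import tensorflow as tf

# Toy binary labels and predicted probabilities (made-up values).
y_true = [0, 1, 1, 0, 1, 0, 0, 1]
y_prob = [0.2, 0.8, 0.4, 0.1, 0.9, 0.3, 0.6, 0.7]

metrics = {
    "accuracy":  tf.keras.metrics.BinaryAccuracy(threshold=0.5),
    "auc":       tf.keras.metrics.AUC(curve="ROC"),
    "auc_pr":    tf.keras.metrics.AUC(curve="PR"),
    "precision": tf.keras.metrics.Precision(),           # TP / (TP + FP)
    "recall":    tf.keras.metrics.Recall(),               # TP / (TP + FN)
    "loss":      tf.keras.metrics.BinaryCrossentropy(),   # mean log loss
}

for name, metric in metrics.items():
    metric.update_state(y_true, y_prob)
    print(name, float(metric.result()))
```

Note that accuracy_baseline in the log is the accuracy of always predicting the majority class: label/mean = 0.23622628, so the baseline is 1 - 0.23622628 = 0.76377374.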
tensorflow evaluate API
https://www.tensorflow.org/api_docs/python/tf/keras/metrics
https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier
TensorFlow (Keras) evaluate API introduction
https://keras-cn.readthedocs.io/en/latest/models/model/
https://keras.io/models/model/
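For the Keras side of the evaluation APIs linked above, a minimal sketch of model.compile / model.evaluate; the model and data are random placeholders:

```python
import numpy as np
import tensorflow as tf

# Placeholder binary classifier on random data.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(),
                       tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

x = np.random.rand(256, 10).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))

model.fit(x, y, batch_size=32, epochs=2, verbose=0)
# evaluate() returns [loss, accuracy, auc, precision, recall] in compile order.
print(model.evaluate(x, y, batch_size=32, verbose=0))
```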
Grappler (TensorFlow graph optimizer)
https://www.tensorflow.org/guide/graph_optimization
https://web.stanford.edu/class/cs245/slides/TFGraphOptimizationsStanford.pdf
https://www.cnblogs.com/cx2016/p/11385479.html
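Per the graph_optimization guide above, individual Grappler passes can be inspected and toggled through tf.config.optimizer; a minimal sketch:

```python
import tensorflow as tf

# Inspect / toggle Grappler passes (option names follow the graph_optimization guide).
print(tf.config.optimizer.get_experimental_options())

tf.config.optimizer.set_experimental_options({
    "constant_folding": True,         # fold constant subgraphs ahead of time
    "arithmetic_optimization": True,  # simplify and deduplicate arithmetic ops
    "layout_optimizer": True,         # choose NCHW/NHWC layouts per device
    "disable_meta_optimizer": False,  # set True to turn Grappler off entirely
})

@tf.function
def f(x):
    # The constant subexpression below is a candidate for constant folding.
    return x * (2.0 * 3.0) + tf.constant(1.0)

print(f(tf.constant(4.0)))
```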