• caffe_ssd learning: training with my own data


    I had barely ever used Linux and knew no shell programming, so I scraped together train.txt and val.txt with a mix of shell on Linux and UltraEdit on Windows (sparing you the many painful errors along the way). Then, following examples/imagenet/readme, I managed to train reference_caffenet on my own data using the ImageNet training procedure, and my little laptop's fan started whirring away again.
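For anyone in the same boat, here is a minimal sketch of how such a list file can be generated under Linux. The directory layout (`demo_data/<class_name>/*.jpg`, one subdirectory per class) and the label numbering are assumptions for illustration, not the layout actually used above; the output format ("relative/path.jpg label") is what Caffe's convert_imageset tool expects:

```shell
#!/bin/sh
# Assumed demo layout: one subdirectory per class under $DATA_ROOT.
DATA_ROOT=demo_data
mkdir -p "$DATA_ROOT/cat" "$DATA_ROOT/dog"
touch "$DATA_ROOT/cat/c1.jpg" "$DATA_ROOT/cat/c2.jpg" "$DATA_ROOT/dog/d1.jpg"

# Emit "relative/path.jpg <label>" lines, the list format that
# Caffe's convert_imageset tool reads for train.txt / val.txt.
OUT=train.txt
: > "$OUT"
label=0
for dir in "$DATA_ROOT"/*/; do
    cls=$(basename "$dir")
    for img in "$dir"*.jpg; do
        [ -e "$img" ] || continue
        echo "$cls/$(basename "$img") $label" >> "$OUT"
    done
    label=$((label + 1))
done
cat "$OUT"
```

Class subdirectories are globbed in alphabetical order, so labels are assigned deterministically; the same script with a different `DATA_ROOT` produces val.txt.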

    After running overnight, the little laptop gave up and threw an error (syncedmem.hpp:25 check failed: *ptr host allocation of size 191102976 failed). My guess is still insufficient memory, since the identical configuration runs fine on a desktop. By the time I came in this morning it had reached about 800 iterations:
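A common workaround for this kind of host-allocation failure is to shrink batch_size in the data layer of train_val.prototxt, since that directly scales the size of the host-side data buffers Caffe allocates. A sketch of the relevant fragment (layer and field names follow the stock CaffeNet definition; the source path and the value 64 are illustrative assumptions):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "examples/imagenet/ilsvrc12_train_lmdb"
    batch_size: 64   # illustrative: reduced from the default 256 for a memory-constrained laptop
    backend: LMDB
  }
}
```

With a smaller batch, iterations are cheaper in memory but noisier in gradient; the solver's max_iter may need to be raised to see the same number of examples.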

    I1101 05:51:24.763746  2942 solver.cpp:243] Iteration 698, loss = 0.246704
    I1101 05:51:24.763829  2942 solver.cpp:259]     Train net output #0: loss = 0.246704 (* 1 = 0.246704 loss)
    I1101 05:51:24.763837  2942 sgd_solver.cpp:138] Iteration 698, lr = 0.01
    I1101 05:52:20.169235  2942 solver.cpp:243] Iteration 699, loss = 0.214295
    I1101 05:52:20.169353  2942 solver.cpp:259]     Train net output #0: loss = 0.214295 (* 1 = 0.214295 loss)
    I1101 05:52:20.169360  2942 sgd_solver.cpp:138] Iteration 699, lr = 0.01
    I1101 05:53:15.372921  2942 solver.cpp:243] Iteration 700, loss = 0.247836
    I1101 05:53:15.373028  2942 solver.cpp:259]     Train net output #0: loss = 0.247836 (* 1 = 0.247836 loss)
    I1101 05:53:15.373049  2942 sgd_solver.cpp:138] Iteration 700, lr = 0.01

    [... iterations 701-815 omitted; loss fluctuated between roughly 0.12 and 0.28, lr stayed at 0.01 ...]

    I1101 07:40:21.197306  2942 solver.cpp:243] Iteration 816, loss = 0.152489
    I1101 07:40:21.197386  2942 solver.cpp:259]     Train net output #0: loss = 0.152489 (* 1 = 0.152489 loss)
    I1101 07:40:21.197393  2942 sgd_solver.cpp:138] Iteration 816, lr = 0.01
    I1101 07:41:16.851727  2942 solver.cpp:243] Iteration 817, loss = 0.211059
    I1101 07:41:16.851809  2942 solver.cpp:259]     Train net output #0: loss = 0.211059 (* 1 = 0.211059 loss)
    I1101 07:41:16.851817  2942 sgd_solver.cpp:138] Iteration 817, lr = 0.01
    I1101 07:42:12.292263  2942 solver.cpp:243] Iteration 818, loss = 0.172165
    I1101 07:42:12.292335  2942 solver.cpp:259]     Train net output #0: loss = 0.172165 (* 1 = 0.172165 loss)
    I1101 07:42:12.292342  2942 sgd_solver.cpp:138] Iteration 818, lr = 0.01
    364 I1101 07:43:07.584506  2942 solver.cpp:243] Iteration 819, loss = 0.217142
    365 I1101 07:43:07.584583  2942 solver.cpp:259]     Train net output #0: loss = 0.217142 (* 1 = 0.217142 loss)
    366 I1101 07:43:07.584590  2942 sgd_solver.cpp:138] Iteration 819, lr = 0.01
    367 I1101 07:44:02.289772  2942 solver.cpp:243] Iteration 820, loss = 0.223516
    368 I1101 07:44:02.289875  2942 solver.cpp:259]     Train net output #0: loss = 0.223516 (* 1 = 0.223516 loss)
    369 I1101 07:44:02.289881  2942 sgd_solver.cpp:138] Iteration 820, lr = 0.01
    370 I1101 07:44:56.864765  2942 solver.cpp:243] Iteration 821, loss = 0.201347
    371 I1101 07:44:56.864830  2942 solver.cpp:259]     Train net output #0: loss = 0.201347 (* 1 = 0.201347 loss)
    372 I1101 07:44:56.864837  2942 sgd_solver.cpp:138] Iteration 821, lr = 0.01
    373 I1101 07:45:51.757936  2942 solver.cpp:243] Iteration 822, loss = 0.137515
    374 I1101 07:45:51.758020  2942 solver.cpp:259]     Train net output #0: loss = 0.137515 (* 1 = 0.137515 loss)
    375 I1101 07:45:51.758028  2942 sgd_solver.cpp:138] Iteration 822, lr = 0.01
    376 I1101 07:46:46.580322  2942 solver.cpp:243] Iteration 823, loss = 0.194158
    377 I1101 07:46:46.580425  2942 solver.cpp:259]     Train net output #0: loss = 0.194158 (* 1 = 0.194158 loss)
    378 I1101 07:46:46.580443  2942 sgd_solver.cpp:138] Iteration 823, lr = 0.01
    379 I1101 07:47:41.901865  2942 solver.cpp:243] Iteration 824, loss = 0.201745
    380 I1101 07:47:41.901943  2942 solver.cpp:259]     Train net output #0: loss = 0.201745 (* 1 = 0.201745 loss)
    381 I1101 07:47:41.901962  2942 sgd_solver.cpp:138] Iteration 824, lr = 0.01
    382 I1101 07:48:38.301646  2942 solver.cpp:243] Iteration 825, loss = 0.193692
    383 I1101 07:48:38.301780  2942 solver.cpp:259]     Train net output #0: loss = 0.193692 (* 1 = 0.193692 loss)
    384 I1101 07:48:38.301789  2942 sgd_solver.cpp:138] Iteration 825, lr = 0.01
    385 ^C

The loss looks like it is oscillating; I don't really understand why. I stopped the run with Ctrl+C, which left a caffenet_train_iter_827.caffemodel under models/bvlc_reference_caffenet. Loading it in Python and testing on my own data, it does produce classifications, although many of them are wrong.
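To judge whether the loss is really oscillating or still trending down, one option is to pull the (iteration, loss) pairs out of the training log and plot or average them. A minimal sketch; the `parse_loss` helper is my own, not part of Caffe:

```python
import re

# Matches Caffe solver lines such as:
#   I1101 07:30:14.139812  2942 solver.cpp:243] Iteration 805, loss = 0.173412
PATTERN = re.compile(r'Iteration (\d+), loss = ([\d.]+)')

def parse_loss(log_lines):
    """Return (iteration, loss) pairs found in an iterable of log lines."""
    pairs = []
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            pairs.append((int(m.group(1)), float(m.group(2))))
    return pairs

sample = [
    'I1101 07:30:14.139812  2942 solver.cpp:243] Iteration 805, loss = 0.173412',
    'I1101 07:31:09.495632  2942 solver.cpp:243] Iteration 806, loss = 0.185132',
    'I1101 07:31:09.495733  2942 sgd_solver.cpp:138] Iteration 806, lr = 0.01',
]
print(parse_loss(sample))  # [(805, 0.173412), (806, 0.185132)]
```

Feeding the pairs to matplotlib (or just eyeballing a moving average) makes the trend much easier to judge than scrolling the raw log.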

The classification script was written a while ago, following a demo found online:

import os
import numpy as np
import matplotlib.pyplot as plt
import caffe

caffe_root = '../'  # path to the Caffe root directory; adjust as needed

if os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet_stamp/caffenet_train_iter_827.caffemodel'):
    print 'CaffeNet found.'
else:
    print 'Downloading pre-trained CaffeNet model...'

model_def = caffe_root + 'models/bvlc_reference_caffenet_stamp/deploy.prototxt'
model_weights = caffe_root + 'models/bvlc_reference_caffenet_stamp/caffenet_train_iter_827.caffemodel'

net = caffe.Net(model_def,      # defines the structure of the model
                model_weights,  # contains the trained weights
                caffe.TEST)     # use test mode (e.g., don't perform dropout)

mu = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
mu = mu.mean(1).mean(1)  # average over pixels to obtain the mean (BGR) pixel values
print 'mean-subtracted values:', zip('BGR', mu)

# create transformer for the input called 'data'
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})

transformer.set_transpose('data', (2,0,1))  # move image channels to outermost dimension
transformer.set_mean('data', mu)            # subtract the dataset-mean value in each channel
transformer.set_raw_scale('data', 255)      # rescale from [0, 1] to [0, 255]
transformer.set_channel_swap('data', (2,1,0))  # swap channels from RGB to BGR

net.blobs['data'].reshape(50,        # batch size
                          3,         # 3-channel (BGR) images
                          227, 227)  # image size is 227x227

image = caffe.io.load_image(caffe_root + 'examples/images/YP1000016.jpg')
transformed_image = transformer.preprocess('data', image)
plt.imshow(image)

net.blobs['data'].data[...] = transformed_image

### perform classification
output = net.forward()

output_prob = output['prob'][0]  # the output probability vector for the first image in the batch

print 'predicted class is:', output_prob.argmax()
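Beyond `argmax`, it is often useful to look at the top-5 predictions, as the original Caffe demo does. A sketch with a dummy probability vector standing in for `output['prob'][0]` so it runs on its own:

```python
import numpy as np

# Dummy stand-in for output_prob = output['prob'][0]
output_prob = np.array([0.05, 0.60, 0.10, 0.20, 0.05])

# Indices of the classes sorted by descending probability, top five kept.
top_k = output_prob.argsort()[::-1][:5]
for idx in top_k:
    print('class %d: prob %.4f' % (idx, output_prob[idx]))
```

With a synset/labels file loaded into a list, `labels[idx]` would turn these indices into readable class names.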

For generating train.txt I looked at a number of references; this blog post was the most helpful, since it also provides the data and the shell scripts: http://blog.csdn.net/gaohuazhao/article/details/69568267

Alternatively, train.txt can be generated with Python.

Following those examples, I made my own training data:

find ./ -name "*.jpg" > train.txt  writes every .jpg under the directory, with its path, into train.txt. I did not know how to append the directory name (i.e. the label) after each path, so in the end I copied the file over to Windows and did that part with UltraEdit...

    find ./ -name "*.jpg" > 1.txt

See create_list.sh.

paste -d' ' train.txt 1.txt >> 2.txt  joins train.txt and 1.txt line by line, writing the result to 2.txt.

Since all the provided data is training data, stored one directory per label, and I didn't yet know how to randomly split off a validation set, I followed a Zhihu answer and decided to just train first without testing: I simply picked a single image to serve as val. (If val.txt is empty, training aborts with an error saying the val size is invalid.)
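The random train/val split I didn't know how to do in shell is straightforward in Python: shuffle the combined list reproducibly and cut off a fraction as val. A sketch (the `split_list` helper is my own; the `max(1, ...)` guard also avoids the empty-val.txt error mentioned above):

```python
import random

def split_list(lines, val_ratio=0.1, seed=0):
    # Shuffle a copy reproducibly, then take the first val_ratio fraction as val.
    lines = list(lines)
    random.Random(seed).shuffle(lines)
    n_val = max(1, int(len(lines) * val_ratio))  # never leave val.txt empty
    return lines[n_val:], lines[:n_val]          # (train, val)

all_lines = ['a.jpg 0', 'b.jpg 0', 'c.jpg 1', 'd.jpg 1']
train, val = split_list(all_lines, val_ratio=0.25)
```

Writing `train` and `val` back out gives a train.txt/val.txt pair with no overlap between the two sets.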

  • Original post: https://www.cnblogs.com/zhengmeisong/p/7741510.html