Here are two python scripts that can read the above files:
{{ :wiki:aca2017:scripts.zip |}}

Here's further info from Milos:

The values are all stored as 16-bit integers (if you load them in numpy, you will get ints). Looking at the binary representation, the first n+1 bits are the integer part and the rest are the fraction bits, where n is the precision given in the lists below. For lenet layer one, the input activations are 2.14 and the weights are 1.15.
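
As a rough sketch of how to interpret these values (assuming that in a format like 2.14 the second number is the count of fraction bits), a raw 16-bit value converts to a real number by dividing by 2^(fraction bits):

<code python>
import numpy as np

def fixed_to_float(raw, frac_bits):
    """Convert two's-complement 16-bit fixed-point values to floats.

    raw:       int16 array as loaded from the .npy files
    frac_bits: number of fraction bits, e.g. 14 for the 2.14 format
    """
    return raw.astype(np.float64) / (1 << frac_bits)

# lenet conv1: input activations are 2.14, weights are 1.15
# acts = fixed_to_float(raw_acts, 14)
# wts  = fixed_to_float(raw_wts, 15)
</code>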

The npy files contain python dictionaries with the values; the layer names are the keys. After loading a file into a variable var, each layer's parameters are accessed as var['layer name'], which gives a 4D array. I never used it, but there is a git repo with code to load npy files in c/c++ (https://github.com/rogersce/cnpy).
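
For example, loading one of the dictionaries might look like this (the file name is a placeholder; substitute the actual .npy file from above; on newer numpy versions allow_pickle=True is required to load pickled dictionaries):

<code python>
import numpy as np

# placeholder file name; use the actual .npy file from above
var = np.load('lenet_weights.npy', allow_pickle=True).item()

print(var.keys())        # the layer names
w = var['conv1']         # 4D array of that layer's parameters
print(w.shape, w.dtype)  # dtype should be int16
</code>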

The numbers should be interpreted as fixed-point values with the following formats. The layers are in the same order as in the prototxt and the diagram, and include only the convolution and inner-product layers.

Lenet:

^ Layer ^ Input activations ^ Weights ^
| conv1 | 2.14 | 1.15 |
| conv2 | 4.12 | 1.15 |
| ip1 | 4.12 | 1.15 |
| ip2 | 4.12 | 1.15 |
| (final output) | 8.8 | |

Nin:

^ Layer ^ Input activations ^ Weights ^
| conv1 | 11.5 | 1.15 |
| cccp1 | 11.5 | 2.14 |
| cccp2 | 10.6 | 2.14 |
| conv2 | 13.3 | 1.15 |
| cccp3 | 13.3 | 1.15 |
| cccp4 | 12.4 | 1.15 |
| conv3 | 12.4 | 1.15 |
| cccp5 | 12.4 | 1.15 |
| cccp6 | 11.5 | 1.15 |
| conv4-1024 | 11.5 | 1.15 |
| cccp7-1024 | 11.5 | 1.15 |
| cccp8-1024 | 10.6 | 1.15 |
| (final output) | 8.8 | |

Activations show the format of the input into each layer. The bias and the output of a layer should follow the format of the following layer (hence the "(final output)" rows above: the last activation value is the format of the network's final output). Weights follow the format of the current layer.
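
As an illustrative sketch of how these formats chain together (the helper below and the truncate-and-saturate choices are assumptions, not part of the assignment): multiplying an a.b activation by a c.d weight gives a product with b+d fraction bits, and the accumulator is then shifted down to the next layer's input format.

<code python>
import numpy as np

def requantize(acc, acc_frac_bits, out_frac_bits):
    """Shift a wide accumulator down to the next layer's input format.

    acc:           int32/int64 sums of int16 x int16 products
    acc_frac_bits: fraction bits in the accumulator, e.g. 14 + 15 = 29
                   for 2.14 activations times 1.15 weights
    out_frac_bits: fraction bits of the next layer's input format
    """
    shifted = acc >> (acc_frac_bits - out_frac_bits)  # drop excess fraction bits
    # saturate to the 16-bit range instead of wrapping around
    return np.clip(shifted, -(1 << 15), (1 << 15) - 1).astype(np.int16)

# lenet conv1: 2.14 activations x 1.15 weights -> 29 fraction bits in the
# accumulator; conv2 expects 4.12 inputs, so shift right by 29 - 12 = 17 bits.
</code>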