Unsupervised machine learning with Boltzmann machines is the inverse problem of finding a Gibbs measure that approximates an unknown probability distribution from a training set consisting of a large number of samples. The minimum size of the training set needed for a good estimate depends on the properties of both the data and the machine. We investigate this problem in a controlled environment where a Teacher Restricted Boltzmann Machine (T-RBM) is used to generate the dataset and a Student machine (S-RBM) is trained on it. We consider different classes of unit priors and weight regularizers, and we analyze both the informed and the mismatched case, which differ in the amount of information the Student receives about the Teacher model. We describe the results in terms of phase transitions in the Student posterior distribution, interpreted as a statistical mechanics system. Special attention is given to the Hopfield model scenario, where the problem is expressed through phase diagrams describing the zoology of the possible working regimes of the entire environment. In this case it is possible to contrast the memorization and learning approaches: when the amount of data grows and the patterns become confused, learning outperforms memorization.
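To make the teacher-student setup concrete, the sketch below shows a minimal version of it: a Teacher RBM with fixed random weights generates binary samples by block Gibbs sampling, and a Student RBM of the same shape is fit to those samples with contrastive divergence (CD-1). This is an illustration under assumed choices, not the paper's actual procedure; all sizes, the Gaussian weight prior, the omission of bias terms, and the CD-1 training rule are assumptions made for demonstration.

```python
# Minimal teacher-student RBM sketch (illustrative assumptions throughout):
# the Teacher generates the dataset, the Student is trained on it with CD-1.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 20, 5             # visible/hidden units (assumed sizes)
n_samples, n_burnin = 2000, 200  # dataset size, Gibbs burn-in per chain

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v, W):
    """Sample binary hidden units given visibles (biases omitted)."""
    p = sigmoid(v @ W)
    return (rng.random(p.shape) < p).astype(float)

def sample_v(h, W):
    """Sample binary visible units given hiddens (biases omitted)."""
    p = sigmoid(h @ W.T)
    return (rng.random(p.shape) < p).astype(float)

# Teacher: fixed random weights; build the training set by block Gibbs
# sampling, one independent chain per sample.
W_teacher = rng.normal(0.0, 1.0 / np.sqrt(n_vis), (n_vis, n_hid))
v = (rng.random((n_samples, n_vis)) < 0.5).astype(float)
for _ in range(n_burnin):
    v = sample_v(sample_h(v, W_teacher), W_teacher)
data = v

# Student: same architecture, trained with CD-1 on the Teacher's samples.
W_student = rng.normal(0.0, 0.01, (n_vis, n_hid))
lr, epochs, batch = 0.05, 30, 100
for _ in range(epochs):
    rng.shuffle(data)  # shuffle samples along the first axis, in place
    for i in range(0, n_samples, batch):
        v0 = data[i:i + batch]
        h0 = sigmoid(v0 @ W_student)  # positive phase (mean-field hiddens)
        v1 = sample_v(sample_h(v0, W_student), W_student)  # one Gibbs step
        h1 = sigmoid(v1 @ W_student)  # negative phase
        W_student += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

# Crude recovery check: cosine similarity between each Student weight
# column and its best-matching Teacher column (ignores the hidden-unit
# permutation and sign symmetries a careful analysis must handle).
Wn_s = W_student / np.linalg.norm(W_student, axis=0)
Wn_t = W_teacher / np.linalg.norm(W_teacher, axis=0)
print(np.abs(Wn_s.T @ Wn_t).max(axis=1))
```

In this toy version, varying `n_samples` relative to the machine size probes the question the abstract poses: how much data the Student needs before the recovered weights align with the Teacher's rather than with individual training samples.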