The "Inverse Ising Problem" refers to finding the parameters (the Jij's and the hi's) in an Ising model given the first and second moments (the magnetizations mi and the correlation functions cij).
This is of considerable interest in machine learning and data analysis whenever the data set and the number of variables are large, but each variable can be treated as binary, taking only "high" and "low" values. The maximum entropy distribution with given first and second moments then has the Ising form, where the hi's and Jij's are the Lagrange multipliers enforcing those moments.
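Concretely, writing s_i = +/-1 for "high"/"low" (a conventional choice; one could equally use 0/1 variables), this maximum entropy distribution reads

  P(s) = (1/Z) exp( \sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j ),

with the h_i and J_{ij} fixed by the conditions <s_i> = m_i and <s_i s_j> - m_i m_j = c_ij.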
Several perturbative methods for solving the inverse Ising problem approximately have been proposed in the last few years. I will give a survey of the situation, with a focus on what we know about the applicability of these methods to data such as gene expression and recordings from many neurons, where the underlying exact description is surely not of the Ising form.
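As a point of reference, the simplest of these perturbative schemes is the naive mean-field (nMF) inversion, which estimates the couplings from the inverse of the connected correlation matrix and the fields from the mean-field equations. The sketch below assumes data given as a T x N array of +/-1 spin configurations; the function name and data layout are illustrative, not from the cited work.

  import numpy as np

  def nmf_inverse_ising(samples):
      """Naive mean-field reconstruction of Ising fields h and couplings J.

      samples: (T, N) array of +/-1 configurations (assumed input format).
      Returns (h, J); J has zero diagonal.
      """
      m = samples.mean(axis=0)              # magnetizations m_i = <s_i>
      C = np.cov(samples, rowvar=False)     # connected correlations c_ij
      J = -np.linalg.inv(C)                 # nMF couplings J = -(C^{-1})_{ij} ...
      np.fill_diagonal(J, 0.0)              # ... keeping only off-diagonal entries
      h = np.arctanh(m) - J @ m             # fields from h_i = arctanh(m_i) - sum_j J_ij m_j
      return h, J

More refined perturbative corrections (e.g. TAP-type terms) modify the coupling and field estimates but follow the same pattern of inverting measured correlations.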
This is joint work with John Hertz and Yasser Roudi (Frontiers in Computational Neuroscience, 2009), and work in progress.