I am writing a general equilibrium model using MPSGE with a state-level SAM from IMPLAN. If my understanding is correct, setting model.iterlim = 0 checks the model's ability to replicate the benchmark data from the SAM. Doing this for my model yields level values of 1 for all production activities and prices, consumption values that match those in the SAM, and all marginals near 0 (between 1e-3 and 1e-6). This tells me that the calibration is appropriate (correct?). However, when I allow the model to iterate (i.e., model.iterlim = 1000) without imposing any counterfactual shock, the production levels, and oddly only the production levels, deviate significantly from 1, with some dropping to 0.
Is my understanding of benchmark replication correct? Does it make sense to run counterfactual scenarios given what is going on in my model and, if so, is it valid to compare them to the 0-iteration results, or should I compare them to the >0-iteration results?
I have only recently started learning MPSGE and have read every user guide for it I could find, but have been unable to find an answer to this question. I would greatly appreciate any insights that you can provide.
P.S. The objective value using 0 iterations is 4.330001E-4, and the residual is 6.153707E-4.
Hi
You might check whether your model is homogeneous: double the numeraire, set the starting values accordingly, and see whether any new infeasibilities appear.
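The logic behind this doubling test: an Arrow-Debreu model should be homogeneous of degree zero in prices and incomes, so scaling the numeraire (and hence all prices and incomes) should leave every real quantity unchanged. A minimal Python sketch of that property using textbook Cobb-Douglas demands (not your MPSGE model; `alpha`, `p`, and `M` are made-up illustration values):

```python
# Cobb-Douglas demand: x_i = alpha_i * M / p_i
def demand(alpha, p, M):
    return [a * M / pi for a, pi in zip(alpha, p)]

alpha = [0.3, 0.7]   # expenditure shares (hypothetical)
p = [1.0, 2.0]       # benchmark prices
M = 100.0            # benchmark income

base = demand(alpha, p, M)
# Double ALL prices and income: demands are unchanged (degree-zero homogeneity).
doubled = demand(alpha, [2 * pi for pi in p], 2 * M)
print(base == doubled)  # True

# Double prices but forget to double income: demands halve. This is the
# kind of inconsistency that shows up as an infeasibility in the test.
wrong = demand(alpha, [2 * pi for pi in p], M)
print(wrong == [x / 2 for x in base])  # True
```

If the doubled benchmark does not replicate, some value (typically an income or endowment) was entered in levels rather than scaled along with the numeraire.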
An infeasibility of 1E-3 might be too big. Did you scale the values in your SAM? If you have a very rigid structure (lots of Leontief elasticities), you might not be able to solve the model, as the convergence criterion is around 1E-7.
Please check your scaling and, if that is OK, check whether the model solves with a more flexible production and demand structure. You could also make your SAM fit better, getting rid of the 1E-3 infeasibility, by using a least-squares procedure to make the row and column sums match more precisely.
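For what it's worth, here is a minimal sketch of one common balancing approach: biproportional (RAS) scaling rather than least squares, run on a made-up 3x3 matrix with a small row/column mismatch. A least-squares procedure would instead minimize the squared deviations from the original entries subject to the same row-sum = column-sum constraints; both drive the imbalance below the solver tolerance.

```python
import numpy as np

def ras_balance(sam, tol=1e-11, max_iter=1000):
    """Biproportional (RAS) balancing: alternately rescale rows and
    columns until each account's row sum equals its column sum."""
    X = sam.astype(float).copy()
    # Target each account's total at the average of its row and column sums.
    target = 0.5 * (X.sum(axis=1) + X.sum(axis=0))
    for _ in range(max_iter):
        rows = X.sum(axis=1)
        r = np.divide(target, rows, out=np.ones_like(target), where=rows != 0)
        X = r[:, None] * X
        cols = X.sum(axis=0)
        s = np.divide(target, cols, out=np.ones_like(target), where=cols != 0)
        X = X * s[None, :]
        if np.max(np.abs(X.sum(axis=1) - X.sum(axis=0))) < tol:
            break
    return X

# Toy SAM (hypothetical values) with a 1e-3 imbalance in one entry.
sam = np.array([[0.0, 2.0, 1.001],
                [1.5, 0.0, 1.5],
                [1.5, 1.0, 0.0]])
bal = ras_balance(sam)
print(np.max(np.abs(bal.sum(axis=1) - bal.sum(axis=0))))  # well below 1e-9
```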
Cheers
Renger
Thank you for the response. I tried doubling the numeraire and the starting values and got a residual of 1.194122e+05. I don’t understand how the model could fail to be homogeneous.
The values in my SAM are scaled to millions of dollars with 6 decimals. I balanced it with a python script to 1E-11 precision in the row sum/column sum differences. I also just ran a least squares balancing script (http://www.mpsge.org/tza/tzabal.htm) which makes some changes, but results in the same differences, so I don’t see how balance could be the issue. Also, the model is finding an optimal solution with normal completion.
As for production, I do have some Leontief nests, but changing them to Cobb-Douglas does not change the solution to the model.
When scaling the SAM, does the number of decimal places make a difference? I’m very much at a loss.
Hi
Scaling to billions would be better: in Switzerland, GDP is around 700 billion, so if I run the model in millions, the biggest value in the model is on the order of 1E5. If you run a model in millions of dollars, the solver will find a solution that is precise to 0.1 dollars (1E-7 * 1E6). Comparing that with your GDP (0.1/GDP) shows how precise your solution is as a percentage.
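The arithmetic above, spelled out for the two scalings under discussion (a sketch; the ~1E-7 tolerance and the 700-billion GDP are the figures from this thread, not universal constants):

```python
# Hypothetical figures from the discussion.
SOLVER_TOL = 1e-7       # approximate convergence tolerance of the solver
GDP_DOLLARS = 700e9     # roughly Swiss GDP, in dollars

def dollar_precision(unit_in_dollars, tol=SOLVER_TOL):
    """Smallest dollar amount the solver resolves when SAM values are
    expressed in the given unit (1e6 = millions, 1e9 = billions)."""
    return tol * unit_in_dollars

for unit, label in [(1e6, "millions"), (1e9, "billions")]:
    p = dollar_precision(unit)
    print(f"SAM in {label}: precision ~ ${p:.2f}, "
          f"i.e. {p / GDP_DOLLARS:.1e} of GDP")
```

So a model in millions resolves roughly dimes; the reason to rescale is to keep the largest values in the model within a few orders of magnitude of 1, where the solver's relative tolerance behaves best.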
The infeasibility in the homogeneity test is probably due to not setting the proper values for the incomes.
You could send me your model and I would take a look at it over the weekend (you can send it to me privately if you don’t want your research open to everybody).
Cheers
Renger