Here, P_max and Q_max are very large numbers, so cons4 and cons5 are not binding at the optimal point of my model. Moreover, cons1 effectively subsumes cons4 and cons5: if y(n1,n2) is zero, then I(n1,n2) is zero by cons2, and then P and Q are zero by cons1.
So, in theory, removing cons4 and cons5 should not change the solution. But after I removed these two constraints, CPLEX returned a different and obviously non-optimal solution.

How do you know that the solution is not optimal? Constraints 4 and 5 could have an impact on the choice of the variables.
(It is hard to check without the GAMS code.)
Cheers
Renger

I found out that if I fix the decision variables at the values obtained without removing those constraints, I get the same (better) objective value as the one obtained without removing them. So GAMS/CPLEX does not return the optimal solution if I remove those constraints. Please find the attached code. In the code, the following are the constraints I removed.
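The check described above can be sketched in GAMS roughly as follows; since the attached code is not shown here, all names (fullmodel, restricted, y, z, ysol) are placeholders, not identifiers from the actual model:

```gams
* Solve the full model (with cons4/cons5) and store the solution
solve fullmodel using miqcp minimizing z;
parameter ysol(n1,n2);
ysol(n1,n2) = y.l(n1,n2);

* Fix the decision variables at that solution and re-solve the
* restricted model (cons4/cons5 removed)
y.fx(n1,n2) = ysol(n1,n2);
solve restricted using miqcp minimizing z;

* If z.l now matches the full-model objective, the earlier
* restricted solve had returned a suboptimal point
```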

If I use Couenne or Bonmin (both are free, global solvers), I get the “correct” answer.
I thought that CPLEX would return the globally optimal solution (I am using 12.7.1), but perhaps we are overlooking an option. Time doesn’t seem to play a role: if I solve the full model and then the restricted one with the iteration limit set to zero, CPLEX uses the solution of the previous full solve and says it is optimal.
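That warm-start experiment can be reproduced in GAMS with the iterlim model attribute; again, fullmodel, restricted, and z are placeholder names, not taken from the attached code:

```gams
* Solve the full model first
solve fullmodel using miqcp minimizing z;

* Then solve the restricted model with zero iterations allowed:
* Cplex warm-starts from the previous solution and, without doing
* any work of its own, declares it optimal
restricted.iterlim = 0;
solve restricted using miqcp minimizing z;
```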

I am no expert in MIQCP problems; perhaps somebody from GAMS can give a more sensible answer (I sent them an email asking them to look at this question).

I guess the LP relaxations Cplex builds during B&C are numerically unstable (or just plain incorrect) when using outer approximation (miqcpstrat=2, see https://www.gams.com/latest/docs/S_CPLEX.html#CPLEXmiqcpstrat). With QCP relaxations (miqcpstrat=1) Cplex returns the correct solution. The numerics of the initial model do not look too bad (to me), so I will send this to Cplex for further investigation. Setting miqcpstrat=1 should work as a nice workaround in the meantime.
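For anyone who wants to try the workaround: the miqcpstrat option goes in a Cplex option file, which the model then has to be told to read. A minimal setup (the model name mymodel is a placeholder):

```gams
* --- cplex.opt (solver option file) ---
* miqcpstrat 1

* --- in the GAMS model file ---
mymodel.optfile = 1;
solve mymodel using miqcp minimizing z;
```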

We got an answer on this. It turns out to be a Cplex bug that can easily be worked around by setting good bounds on the free variables. Here is the original reply:

I found the primary source of trouble. It probably occurred on this model but not on previous ones because the free variables in this model pretty clearly cannot take on large values, yet CPLEX’s presolve was unable to deduce reasonable finite bounds for them. With the infinite bounds, CPLEX created a numerically problematic node QCP, and that led to the wrong answer. The fix for that issue was straightforward and will go into CPLEX’s next version. With default settings, I now get consistently correct answers with miqcpstrat=2 for 20 random seeds.

However, with scaling set to 1, I still got a couple of suboptimal solutions declared optimal over 20 seeds. This traced back to some relatively elaborate parameter settings for the node QCP (which we have to solve with miqcpstrat=2 when branching gives us an integer-feasible solution for the outer approximation) that had just barely too many primal infeasibilities. That triggered logic to use tighter barrier tolerances to try to get rid of the infeasibilities, but the tighter tolerances then led to barrier just barely failing to converge. We can’t change these tolerances based on one model, so we will not try to address this in CPLEX’s next version, for which we are about to freeze the code. However, we will keep the work item open for the next development cycle, so hopefully we will come up with something.

Meanwhile, in terms of workarounds for 12.8: as I said above, it looks to me like you could put some very modest bounds like [-100,100] on all the free variables without changing the meaning of the model. I’ll give this a closer look, as I’m curious why presolve didn’t pick this up. That would provide a robust workaround. And once CPLEX’s next version comes out, you could still do that; but if that were not an option, you could run with a slightly larger feasibility tolerance of 1e-5, and you should get consistently clean runs with scaling set to 1. Or leave the feasibility tolerance at its default and run with numerical emphasis enabled; that would work too.
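In GAMS terms, the bound workaround and the option-based alternatives look roughly like this; P and Q are placeholder names for the model’s free variables, and the option names are the GAMS/Cplex spellings of the parameters mentioned in the reply:

```gams
* Workaround 1 (works today): modest finite bounds on the free variables
P.lo(n1,n2) = -100;  P.up(n1,n2) = 100;
Q.lo(n1,n2) = -100;  Q.up(n1,n2) = 100;

* Workaround 2 (after the next Cplex release), via cplex.opt:
*   eprhs 1e-5            larger feasibility tolerance
*   scaind 1              scaling set to 1
* or, leaving eprhs at its default:
*   numericalemphasis 1   numerical emphasis enabled
```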