Re: [Help-glpk] Model Too Large


From: Andrew Makhorin
Subject: Re: [Help-glpk] Model Too Large
Date: Wed, 4 Apr 2007 13:18:27 +0400

> I have been using glpsol successfully to solve some game theory
> problems.  The models I make optimize a player's strategy mix against
> the maximal choices by the opponent.
> 
> I tried to extend this approach to a larger problem ("3-card push or
> fold lowball").  The core of the model is this constraint:
> 
> # Define the maximums from B's perspective
> # What this says is: B will have a pure strategy that is best against
> # our mixed strategy, for each hand 'k' he draws.
> s.t. MaxEV {(ka,kb,kc) in Hands, l in PlayerBStrategies} :
>      m[ka,kb,kc] >= sum{ (ia,ib,ic) in Hands, j in PlayerAStrategies }
>           ( p[ia,ib,ic,j] * weighted_ev_for_b[ia,ib,ic,ka,kb,kc,j,l] );
> 
> There are 455 elements in Hands, 257 PlayerAStrategies, and 5
> PlayerBStrategies.
> 
> Unfortunately the matrix this generates is fairly dense (I think).  A
> dense matrix representation should take about 2.4 GB.  But loading
> this model into glpsol uses more than 100 GB of memory; it did not
> finish loading before exhausting all the swap I had allocated on my
> (64-bit) system.

Unfortunately, using the MathProg language incurs a high memory overhead
for storing the constraint matrix and other model components.
Besides, the language implementation is quite far from ideal.

> 
> Is there any way I can reduce the memory usage?  Any way I can rewrite
> the above constraint to be more tractable?  (Other than just reducing
> the size of the problem.)
> 
> If not, can anybody recommend an alternate solver better suited to
> dense matrices?
> 
> Is there some way I can get the LP problem output incrementally?  I
> tried using glpsol's option --wfreemps, but ran into the same memory
> exhaustion.  How feasible is it to reuse the MathProg model parser,
> but write a C/C++ backend that spits out just a portion of the problem
> at a time, rather than keeping it all in memory?

I would suggest either generating the model with the GLPK API routines
directly, or writing a program that generates the model in, say, CPLEX LP
format. The latter way seems to me easier and more attractive.
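For illustration, here is a minimal sketch of the first approach: building
a small LP directly through the API and solving it. The glp_* names shown
are the current routine names (exact names vary between GLPK releases;
older versions used lpx_* equivalents), and the tiny two-variable LP is a
stand-in, not your poker model. Your generator would instead loop over
Hands x PlayerBStrategies to create rows and Hands x PlayerAStrategies to
create columns, filling ia/ja/ar only with the nonzero coefficients.

/* sketch: build and solve a small LP via the GLPK API instead of MathProg */
#include <stdio.h>
#include <glpk.h>

int main(void)
{
    glp_prob *lp = glp_create_prob();
    glp_set_obj_dir(lp, GLP_MIN);

    /* two constraints (rows) */
    glp_add_rows(lp, 2);
    glp_set_row_bnds(lp, 1, GLP_LO, 1.0, 0.0);   /* x1 +   x2 >= 1 */
    glp_set_row_bnds(lp, 2, GLP_LO, 2.0, 0.0);   /* x1 + 3 x2 >= 2 */

    /* two variables (columns), objective: minimize x1 + x2 */
    glp_add_cols(lp, 2);
    glp_set_col_bnds(lp, 1, GLP_LO, 0.0, 0.0);
    glp_set_col_bnds(lp, 2, GLP_LO, 0.0, 0.0);
    glp_set_obj_coef(lp, 1, 1.0);
    glp_set_obj_coef(lp, 2, 1.0);

    /* constraint matrix in coordinate (triplet) form; index 0 unused */
    int    ia[1+4] = {0, 1, 1, 2, 2};
    int    ja[1+4] = {0, 1, 2, 1, 2};
    double ar[1+4] = {0, 1.0, 1.0, 1.0, 3.0};
    glp_load_matrix(lp, 4, ia, ja, ar);

    glp_simplex(lp, NULL);
    printf("obj = %g\n", glp_get_obj_val(lp));

    /* the problem can also be written out in CPLEX LP format */
    glp_write_lp(lp, NULL, "model.lp");

    glp_delete_prob(lp);
    return 0;
}

The CPLEX-format route is essentially the same loops, only with fprintf()
writing the LP file one row at a time instead of making API calls, so the
full matrix never has to sit in memory at once.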


Andrew Makhorin




