Discussion in 'Modding' started by Ftoomsh, Sep 19, 2017.

  1. Ftoomsh

    Ftoomsh Well-Known Member

    Has anyone done any modding work with the C3 AI?

    The reason I ask is that I am wondering whether anything can be done to improve the AI in a mod. The C3 AI itself is not very good, as I think we all know. Even level Impossible is no real challenge. Then, of course, we notice that the C3 AI is not a "mod responsive AI". It is set up for C3 vanilla, so it cannot re-calculate and re-analyze when parameters are changed. Of course, it would be rather unrealistic to expect this from a basic AI.

    I have a few questions.

    (1) Is it possible to tell the current AI to make new decisions about which units to make?

    (2) Is it possible to program opening builds into the AI so it follows a build order for say (example), 10 pt, 5,000s resources, European nation?

    (3) Does level Impossible in C3 Vanilla cheat to get extra resources or cheap upgrades?

    (4) Why does the C3 Vanilla AI not make proper formations?
    Why does it seem to make formations that a human player cannot make?

    I have a hundred more questions but that will do for starters. :)
  2. Hansol333

    Hansol333 Active Member

    Hi, one of the main reasons I stopped modding was the terrible AI. Still, I managed to do a lot of things:

    1) Open the AI script file and look for

        _ai_TryUnit(plind, cid, pikemanUnit, bar_count);

    and replace it with

        _ai_TryUnit(plind, cid, gc_ai_unit_musk17, bar_count);

    Now the AI will train 17c musketeers instead of 17c pikemen (musketeers are much more dangerous than pikes, at least when controlled by the AI).

    2) In the same file, look for

        if (gMap.settings.gen.resourcestart=ai_st_res_millions) then
            _ai_TryUpgrade(plind, cid, gc_ai_upg_builders);

    3) As far as I know, no, but you can add a bonus for the AI. Open the file player.script, go to the gc_upg_type_effectfood : begin branch, and change it to:

        gc_upg_type_effectfood : begin
            gPlayer[plInd].resefficiency[cid][gc_resource_type_food] := gPlayer[plInd].resefficiency[cid][gc_resource_type_food]+round(value);

            if (gPlayer[plInd].bAI) then begin
                gPlayer[plInd].resefficiency[cid][gc_resource_type_food] := gPlayer[plInd].resefficiency[cid][gc_resource_type_food]+round(value);
            end;
        end;

    Now the AI gets double the food-gathering bonus. You can do the same with, for example, damage, so that every damage upgrade also increases training speed by 5% or so.

    However, I tried to make the AI much better, and it is very hard work. For example, the build order is GARBAGE: if you start a million-resource game they build 2 town halls, a mill and a blacksmith, and then as many 18c barracks as possible before even ONE 17c barracks; you can raze them to the ground before they even finish the first one.

    However, the main reason I stopped was that the AI is so insanely defensive. They come at you with some units and then they retreat. If I give them a lot of bonuses (especially training time bonuses) the AI will at some point build up and overrun me, but the beginning is so boring. If someone has managed to change that (so that they always attack and never retreat) I might continue.
  3. Ftoomsh

    Ftoomsh Well-Known Member

    Thanks for replying. My questions might have sounded like I wanted to make a cheating AI. Actually, I don't. I wanted to know if the standard AI cheated.

    I agree with you. I don't want to make the AI cheat; I want to make it better by proper techniques, so that it plays an interesting and challenging style. However, I don't have much knowledge of how AI works in RTS games. I do know how chess AI works. RTS is much more complex than chess, of course, but I believe similar techniques could be used in RTS, applied differently as required.

    Chess AI has just three basic modules (if we ignore the input/output module). These are:

    1. Legal Move Generator
    2. Tree Search Algorithm
    3. Position Evaluation Function.

    The Legal Move Generator generates all legal moves for any given position. The Tree Search Algorithm (a recursive search) explores all possible continuations to a given depth, say 8 ply, which would be 4 moves by each player. At the end of each line, the Position Evaluation Function is applied to the resulting position and gives it a score. When all end positions have been scored, the algorithm picks the move that leads to the best achievable score for the computer AI.
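
    As a sketch, the three modules map onto a few lines of Python. The "game" below is a deliberately trivial stand-in (a pile of tokens, each player removes 1 or 2 per turn) with a toy evaluation, but the structure of move generator, fixed-depth search, and evaluation at the leaves is the chess AI structure described above:

```python
# Minimal sketch of the three chess-AI modules on a toy game:
# a pile of tokens, each player in turn removes 1 or 2 tokens.
# The evaluation is a toy heuristic, not a real game-theoretic score.

def legal_moves(pile):
    """Module 1: generate all legal moves for the position."""
    return [m for m in (1, 2) if m <= pile]

def evaluate(pile):
    """Module 3: score a leaf position from the mover's point of view."""
    return -pile  # toy heuristic: fewer tokens left to face is better

def search(pile, depth):
    """Module 2: fixed-depth tree search (negamax form).

    Returns (score, best_move); score is from the side to move."""
    moves = legal_moves(pile)
    if depth == 0 or not moves:
        return evaluate(pile), None
    best_score, best_move = float("-inf"), None
    for m in moves:
        score, _ = search(pile - m, depth - 1)
        score = -score  # the opponent's best line is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move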

    Of course, chess is a relatively simple array game (for a powerful computer). It is an array of 8x8 and it has an average branching factor (average number of legal moves in legal positions) of 36. Thus to search to "n" ply depth means evaluating 36 to the power "n" positions on average.

    The immediate thought is that RTS is too complex to search for game moves, because it is a vast array compared to a chess board. However, if we break RTS down into components we can find ways to apply search algorithms to it. Heuristics (general rules) will be needed too, but let's look at search possibilities first. We must use a divide and conquer approach. The first problem is economic production. The second is military production. The third is conflict (military combat: tactics and strategies).

    The first two problems, economic production and military production, should initially be solved on their own, without consideration of the combat problem. Let us start with economic production. Imagine an AI that needs no "opening book": no set build order for each peacetime setting, for each resource start, or even for different mods with different costs. Instead, imagine an AI which can make intelligent build order decisions for any start and for any mod. How could it do this?

    The answer lies in doing a tree search of possible legal moves and in being able to evaluate possible results. Some legal moves are constrained by build dependencies. This prunes the search tree somewhat for us. But where there are choices, the algorithm will analyze all choices for possible results. Take a C3 vanilla, 10 pt, 5,000s build as an example. There are many possibilities of build orders to the 10 minute mark. How would the algorithm choose? It certainly could not search 10 minutes ahead for all possibilities. Indeed, it will search "buildings and units ahead" not "time ahead".

    The key would be in the position evaluation function. Idle resources in stock have some value, so 5,000 of each resource has a value. Furthermore, each resource has a different value. The way to measure resource value in C3 would be by labor time: how many peasant minutes does it take to gather, say, 1,000 of each resource? Once we have the peasant labor value we can equate resources to each other and reduce them to a common currency (if we wish), which would be gold value. The most valuable commodity is treated as commodity money.
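
    As a toy illustration of this labor-time currency (the gather rates below are invented, not C3 data):

```python
# Sketch: value resources by peasant labour time, then express any
# stockpile in a common "gold" currency. Rates are invented examples.

GATHER_PER_MIN = {"food": 50.0, "wood": 40.0, "stone": 30.0, "gold": 20.0}

def peasant_minutes(resource, amount):
    """Labour time needed to gather `amount` of `resource`."""
    return amount / GATHER_PER_MIN[resource]

def gold_value(resource, amount):
    """Equal labour time = equal value, expressed in gold units."""
    return peasant_minutes(resource, amount) * GATHER_PER_MIN["gold"]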

    However, I do not want to get bogged down in that part of the theory here and now. The important thing is this: how does the algorithm "decide" to build a building? The answer is that it has to value encapsulated value more highly than free value. If you have 600 wood and 600 stone in stock then this has a certain value. Let us assume for the exercise that they both represent the same amount of peasant labor to gather, so we can equate them. The AI then has 1,200 "points" of value in store. If the AI leaves the resources idle (always a choice) then these contribute 1,200 points to the total position score. However, if the AI were to encapsulate these 1,200 "points" of value into a Town Hall, the AI would be better off. We know that from practical play.

    How do we encourage the AI to build by decision rather than by rote build list? We could employ the simple expedient of doubling free value when it is incorporated into a building. The AI can now make a decision: it can (a) leave the resources in store as free value and build nothing, for an end score of 1,200, or (b) incorporate the resources into a building and double their value to 2,400. By simple logic the AI routine now "decides" to build, as it is programmed to always optimise its economic score. Note it is purely an economic score we are talking about here.

    We can note that this sort of programming encourages the AI to do something we know from practical play is the best way to play: do not leave resources idle, but put all stored and gathered resources to work as soon as possible. That is the way to build a big economy, and a big economy is the way to build a big military. In addition, the AI can be helped to choose to turn on producing buildings by giving a bonus, say 10%, to a producing building. A producing town hall worth 2,640 points is then valued over an idle town hall worth 2,400 points, and so production is selected. In turn, each new peasant is worth double (in points) the food it took to produce him.
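
    The choice between idle stock, an idle building, and a producing building reduces to a tiny scoring function. The numbers follow the worked example above, with the suggested 10% bonus applied to the doubled value of a producing building:

```python
# Sketch of the three-way decision described in the text.
# COST and the 10% producing bonus are the example's numbers.

COST = 1200          # 600 wood + 600 stone, equated for the example
PRODUCING_BONUS = 0.10

def option_scores():
    idle_stock = COST                       # free value counts once
    idle_building = 2 * COST                # encapsulated value counts double
    producing = idle_building * (1 + PRODUCING_BONUS)
    return {"idle_stock": idle_stock,
            "idle_building": idle_building,
            "producing_building": producing}

def decide():
    """Pick the option with the best economic score."""
    scores = option_scores()
    return max(scores, key=scores.get)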

    As I noted above, building dependencies actually help the AI to make decisions. Here game design helps the AI. At each point in the build, there are only limited options for the next build. This narrows the economic search tree and thus reduces the computations necessary. The market will introduce complexities which will take careful thought in the design of the AI. The AI will need to be able to calculate forward (via the tree search algorithm and position evaluation function) that it could increase its score by selling resources and building more buildings. This is intrinsic to the design, so if the design is executed correctly it will do this. However, can it look far enough ahead (in its tree searches) to detect that an early market (a cheap building, after all) will greatly enhance its score later? This is doubtful. In that case a special heuristic, a general rule, may have to be incorporated to force an early market build and permit rapid sales at the best market prices. Then again, the goal-seeking incorporation of free resources into buildings (and units) may ensure this occurs in any case.

    This is the first step: the "economic growth problem" has to be solved by the AI first. To run such an AI project, this divide and conquer approach would have to be taken. Later on, for the war AI, the solutions will likely come from calculating a geometric solution, where armies are concentrated to create military "centers of gravity" and where "sufficient force" is always dispatched to counter flank and economy threats (raids). So the war AI will not look ahead like a chess AI; it will look ahead in a different way, where military centers of gravity, travel paths, and interception and defense points are calculated in a geometric and topographical fashion. In other words, the solutions could likely come from the field of network-centric warfare theory.
    Last edited: Sep 20, 2017
  4. Ftoomsh

    Ftoomsh Well-Known Member

    Currently, I am looking at the collection of AI script files, and I must admit I don't understand much about them yet.

    Where does the AI decide which building to build next? And how does it decide? Does it just build from a set list or does it do some calculations to decide which building it would be best to build next?

    My thinking is that a true economic AI would be mod-independent and parameter-independent. That is to say, it would make different build order decisions based on the parameters of the version or mod in question. I have a few ideas on how this could be done, but it will take me a long time to develop and test them.