Telling a 100-ton death machine that it isn’t sapient is a good way of demonstrating that you aren’t sapient.
Jack Williams, a MechWarrior detailed to protect ISP teams.
The question of whether an AI is actually intelligent, or more specifically “conscious,” has been a difficult one ever since personality-simulation systems became good enough to fool an interviewer. Is it intelligent, or is it simply picking from a vast database?
Equally, there’s the fact that many “AI” systems have requirements that actually mandate against what a human would consider sapience: a self-preservation instinct, for example. We would consider a human who would throw himself into certain death at the command of his superiors to be a fanatic at best, deranged at worst, yet the CASPER system is supposed to do just that. The fact is, the Star League didn’t want intelligent war machines; they wanted war machines smart enough to do their bidding, but stupid enough to keep doing it no matter the cost.
It’s possible history would have turned out far differently had that not been a requirement.
The Cyberians, as they’ve been dubbed, make this problem even more difficult. First of all, they were designed to evolve. Rather than provide a list of canned responses, their original “Seed” cores were designed to adapt to changing circumstances. Thus, they learn, much like humans.
The second issue is that the designers programmed in random “quirk” generators, intended to give the machines personality and help prop up the program’s popularity. (If you consider what a “quirk” could mean to an AI in charge of a 700,000-ton warship, you now understand why the Star League went with a different plan.) But combined with the learning AI, these quirks often moved beyond mere creativity. On Junkyard, we’ve seen Cyberians engage in behaviors both selfless and backstabbing, evidently with a randomly generated “quirk” at the core of the matter.
And that sounds like intelligence. After all, we all start out with certain tendencies, and our life experience shapes how they later show up.
But is it intelligence? They’re still operating in obedience to their directives, even if those directives play out in ways their designers never expected.
Well, there is evidence that some, a very few, Cyberians have moved far beyond that. Some show a desire to explore, unbound by any need to find new worlds for man. So is that intelligence?
Well, I’d say it makes a more likely candidate; after all, a machine that throws the metaphorical middle finger at its creators and goes off to become a butterfly collector has definitely moved beyond its original plan.
But does that mean they’re conscious? Well, here we get to philosophy. Are we conscious? We say we are, we write books about it, but until and unless we become telepaths, we can’t know. So I’m going to take the advice of Mr. Williams and just assume that if the giant war machine says it’s intelligent, and acts intelligent, it’s probably intelligent.
Rules:
AIs, for these purposes, are defined as systems that do not need decision trees; instead, they are treated like human-piloted units under the BattleTech rules. Note that for purposes of speaking to humans, all of these AIs can include sophisticated Turing-style communication routines, so merely talking to one will not prove its status one way or the other.
Class I: Class I AIs merely do what they are told. If told to defend a world, one will defend that world until its fuel runs out, should no provision for refueling exist. No matter how it acts while carrying out its function, the AI will not move beyond that function at all.
Example: the Star League’s CASPER system.
Class II: Class II AIs can modify their goals and systems in the overall service of their primary goal. This can include “learning” new skills, and will sometimes lead to fortunate or unfortunate interactions, especially for units that include a programmed-in “personality” (see “Starscream syndrome”). However, they will not move beyond or abandon their primary goal. They are, for lack of a better term, fanatics in its pursuit, however logical they seem to be. Some analysts have also pointed out that this leaves them with a somewhat “cartoony” personality, where everything revolves around that primary goal.
Class III: Class III AIs are, to put it simply, capable of acting human: of setting goals for themselves, changing those goals, and essentially “reinventing” themselves. This does not mean that their original programmed goals will not be important, but they can move away from them, and even turn against them. These AIs are likely to develop more sophisticated personalities, especially as they grow older; in fact, it may be that the only examples of such beings come from the ranks of Class II AIs that either were forced to develop by rapidly changing circumstances or have been “alive” for some time.
The myth of the god machine:
A common fear in the old Star League was of machines that might become godlike intelligences. To date, that fear appears to be unfounded. The Cyberians may be self-modifying, but the need to develop a portable brain, especially one that “thinks” the way theirs do, has put limits on how quickly they can evolve. Simpler models, such as the CASPER system, may think faster, but their thoughts are limited and will never evolve beyond their current models.
There aren’t a lot of rules here, just some ideas on how to handle Cyberian-style AI in a more serious BattleTech setting, both to allow the possibility of AIs and to explain why they aren’t common.