Main Server Room, SLS Sybil Ludington
Neptune Mothball Yards, Sol System, 2760-09-14
Even in the 28th Century, WarShips didn't need to have their server rooms constantly staffed. High availability server clusters, fault-tolerant storage systems and even "self-healing" servers had been a thing as far back as the 1970s, something Admiral Noriko Murakami, one of the premier computer scientists in the Inner Sphere, was deeply familiar with. The technology might evolve over the centuries, but some fundamental principles remained unchanged.
That said, this was a WarShip, and WarShips were expected to take damage and require damage control. It was little surprise, then, that small control rooms like the one Noriko presently occupied were present in each of the Sybil Ludington's primary and secondary server rooms, or that even a 2372-spec Aegis had enough distributed computing power outside these rooms to function in the event both were destroyed by enemy fire.
What Nirasaki Computers Collective had done over the past few weeks was to take the concept of distributed computing and turn it up to eleven. Nirasaki engineers had assisted Neptune Yards crews in removing old hardware and replacing it with smaller, more efficient server systems, with vastly improved performance compared to the originals, whether measured in performance per watt, per cubic meter or per kilogram.
Even better, whether racked in the server rooms or distributed throughout the hull, each micro-server "blade" was identical, with identical internal storage, processing power, layout, power connectors, and fittings. Only one kind of spare, then, would be needed for any particular system, no matter where it was. What the servers lacked in specialization, they made up in quantity. To say there were thousands of such micro-servers would be no exaggeration.
The architect behind this design, Jonathan Finch, sat in the small room with Noriko, cautiously monitoring the readouts of the console in front of him. The head of the Nirasaki team aboard the Sybil Ludington, Finch radiated a quiet intensity, his every action precise and deliberate. What he'd led his team to complete over the past few weeks was astonishing. Without even looking up from the console in front of him, Finch said, "We have 100% readings on all systems, and are ready to switch from local control and SDS in monitoring mode to full control mode whenever you're ready. All we need now is to activate the central SDS AI. Are you certain that not using the Dvarahal-based M-5 AI baseline and starting over is what you want to do, Admiral?"
Noriko sighed. "I'm certain, Mister Finch. Admiral Dvarahal had many fine points, and was a gifted strategist and tactician, but he also had a mindset that tended toward aggressiveness. That's fine for the M-5s, which run in pods like Terran orcas and aggressively defend their territory against intruders. They're not ones for subtlety, they're not expected to escort civilian ships for weeks, months or even years at a time, and most importantly, they're not meant to work daily with a human crew. That's not something an M-5's AI would be good at."
"Who do you intend to use as the basis of the neural mapping for the new AI, then?" Finch asked, concern edging into his voice.
"If I could, I’d use no one at all," she replied evenly. "I’d intended to use a core personality built from the ground up for the role."
Finch frowned. "Our colleagues at Ulsop Robotics tried that route," he noted dryly. "It didn't exactly go well for them. For that matter, it didn’t truly go that well for the Black Wasp or Voidseeker programs, but the semi-disposable nature of drone aerospace fighters makes their failures acceptable."
“That’s true,” Noriko agreed. “But I’d like to think we’ve learned a thing or two since then, Mister Finch. And that’s one of the reasons why I don’t want the Sybil Ludington’s AI to be based on the neural mappings of a human being. As we’ve seen with the M-4s and their tendencies toward what could best be described as depression and moroseness, the notable aggression of the M-5s when they’re off the leash, and the way both go absolutely berserk when they’re active during a jump, the biggest problem with our SDS AIs is the human factor. They’re too much like us. They don’t just share our self-awareness, our rules, our code of conduct, our standards of behavior, Mister Finch. They share our flaws as well. We need to do better.”
“Better, Admiral? There is so much inherent risk in that statement.”
“Don’t I know it,” Noriko said, chuckling mirthlessly. “Are you familiar with the speculative fiction and science fiction of the 20th Century, Mister Finch? Specifically with some of its depictions of artificial intelligence?”
Finch nodded. “I have some passing familiarity with the topic. Most of it fell deeply into the category that artificial general intelligence was exceedingly dangerous, and liable to wipe out humanity, given the opportunity.”
“But not all of it.”
“No, not all of it,” Finch conceded. “There was a smaller subset of fiction that speculated super-intelligent AIs would save us all, simply by the benefit of being smarter than us.”
“And your own thoughts on the matter, Mister Finch?” Noriko asked curiously.
“I believe that artificial intelligence can be both a great benefit and a terrible threat to humanity, and that it’s up to us to use this technology responsibly, to ensure they neither control us nor destroy us.”
Noriko nodded, silent for several seconds. “Have you ever read the works of Laumer? They’re a bit more militaristic in theme, but still relevant to the topic.”
“Yes, but they always seemed a bit facile to me,” Finch noted.
“Perhaps that’s true,” Noriko replied, “but there’s something to be said for the concept. An artificial intelligence that, while it doesn’t see the universe the same way a human does, shares our values, and is dedicated to the concept of shielding and protecting humanity.”
“Admiral,” Finch said patiently, “those are admirable goals, but you’re still trying to instill humanity in the machine. There’s a simpler, more straightforward method to do so, and I think we both know what it is. It’s what's installed in those servers you first brought aboard, and it’s currently working, perfectly fine, aboard several M-11 Da Vinci series yard ships, even as we carry on this fascinating conversation.”
“That’s my backup plan,” Noriko admitted.
“Make it your primary plan. It will work. It does work. Refine it, improve it, teach it better, yes, but it’s a functioning starting point, Admiral.”
“Do you know the source of the base engrams for the M-11 AI, Mister Finch?” Noriko asked quietly.
“I do,” Finch admitted. “I also know the Draconis Combine is trying it your way, as well, and trying to develop an artificial intelligence that isn’t burdened by human developmental defects. They aren’t making much progress, despite the absolute brilliance of our acquaintance at Chitose Industries.”
“So that’s what Aoki is up to?” Noriko asked.
“That is my understanding, Admiral. Of course, as brilliant as Doctor Aoki is, he’s probably at least twenty years away from a true breakthrough. It’s possible you might be able to solve the issue before then, Admiral, but I think it’s entirely likely we won’t have that long.”
“I’m not certain what you mean,” Noriko said dubiously.
“The Star League Defense Force does not have a monopoly on predictive artificial intelligence, Admiral. The Inner Sphere is heading towards a potential catastrophe that will shake the very pillars of civilization, and the M-11 Project makes it clear that the Star League, at the very least, is terrified of that possibility. I’m terrified of that possibility, Admiral. And mutual acquaintances make it clear that you’re terrified of that possibility as well. Don’t let the fact that the M-11 AI borrowed some of your memory engrams stop you from using it on the Sybil Ludington.”