The quiet AI layer beneath Artemis and modern space missions

Artemis and Russia’s space missions do not look alike on the surface. One is a highly public, multinational lunar architecture led by NASA. The other is a more fragmented mix of Russian lunar ambitions, orbital-station plans, and ISS operations shaped by a very different political and industrial setting. Yet both are being pulled in the same technical direction by the same hard truth: deep-space missions increasingly depend on artificial intelligence and autonomous software. As of April 2026, that is no longer theoretical. Artemis II lifted off on April 1, 2026, and Russia is simultaneously tying AI to crew support on the ISS and to its planned future orbital station.

Official crew portrait, clockwise from left: Koch, Glover, Hansen and Wiseman

That shared reliance on AI is often misunderstood. It is not mainly about flashy humanoid robots or a chatbot floating in zero gravity. The real connection is operational. At the Moon, distance, weak communications geometry, hostile terrain, limited crew time, and massive data flow all punish slow decision-making. The result is simple: spacecraft, rovers, stations, and mission software have to do more sensing, sorting, judging, and reacting on their own.

AI is moving into the space architecture

NASA is unusually explicit about this shift. Its own AI overview says artificial intelligence already supports missions across the agency, including lunar and Mars exploration, mission planning, and autonomous spacecraft operations. That framing matters because it shows AI is not being treated as a side experiment bolted onto Artemis. It is being written into the architecture of exploration itself.

You can see that in the Artemis stack almost everywhere you look. NASA’s VIPER rover work used AI algorithms to help select safer landing conditions, assess risk, optimize decisions, plan rover paths, help operators drive, and build more accurate maps of the mission area. The Lunar Terrain Vehicle planned for Artemis surface operations is described by NASA as having autonomous driving, along with advanced communications and navigation. That is a revealing combination. NASA is not designing lunar mobility as a glorified off-road car. It is designing a machine that must keep working when no astronaut is holding the wheel and no controller on Earth can solve every problem in real time.

The same pattern appears in lunar navigation and orbital infrastructure. NASA’s LunaNet concept is meant to give missions at the Moon the measurements needed for onboard orbit determination, guidance, and surface positioning without constant Earth-side processing. Gateway pushes the idea further. NASA says Gateway is designed to operate uncrewed through remote operations for up to three years, and that research between crewed visits will rely on autonomous systems and remote operations. HALO, the first habitation element, is also built around a software layer meant to enable autonomous station operations.
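To make "onboard positioning without Earth-side processing" concrete, here is a minimal sketch of the kind of computation a LunaNet-style measurement service would let a spacecraft or surface asset run locally: fixing a 2-D position from ranges to three known reference points. The beacon layout, units, and numbers are invented for illustration; real lunar navigation works in three dimensions with orbit dynamics and error modeling on top.

```python
# Toy 2-D trilateration: estimate a position from measured ranges to three
# known reference points, the kind of onboard fix a LunaNet-style service
# is meant to enable. All coordinates and ranges are illustrative.

def trilaterate(beacons, ranges):
    """Solve for (x, y) given three beacon positions and measured ranges."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Subtracting the first range equation from the others linearizes the
    # system: 2(xi-x1)x + 2(yi-y1)y = r1^2 - ri^2 + xi^2 - x1^2 + yi^2 - y1^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1                     # nonzero if beacons are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical beacons, km
ranges = [5.0, 65.0 ** 0.5, 45.0 ** 0.5]            # measured distances to each
print(trilaterate(beacons, ranges))                 # recovers the position (3.0, 4.0)
```

The point of the sketch is the architectural one the article makes: once the measurements arrive onboard, the fix itself is cheap local arithmetic, not a round trip to Earth.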

NASA is even widening the AI conversation beyond navigation and control. In January 2026, NASA opened a Moon-and-Mars research opportunity focused on foundation AI models for science and exploration applications. That does not mean a large model is about to pilot Orion. It does show that NASA sees the next phase clearly: AI will not only move hardware, it will increasingly help interpret terrain, science, logistics, and mission context at scale.

Autonomy is the real common denominator

The deepest common ground between Artemis and Russian missions is not branding, symbolism, or geopolitics. It is autonomy. When a spacecraft is descending toward rough polar terrain, when a rover is threading through uncertain ground, or when an orbital outpost sits uncrewed for long stretches, Earth-based supervision is not enough. AI enters because the mission environment forces a transfer of judgment from people to machines, even if only in narrow, tightly bounded ways.

ESA’s documentation on Luna-27, a mission developed with Roscosmos, describes this very clearly. Two minutes before touchdown, the lander’s Pilot computer is supposed to analyze the terrain in detail. Lidar and optical sensing feed the computer, which evaluates hazards such as boulders, slopes, and craters so the spacecraft can choose a safer landing approach. That is not a decorative use of software. It is machine perception and machine judgment inserted directly into the most unforgiving part of the mission.
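The selection step ESA describes can be sketched in a few lines. This is not ESA flight software: the cell structure, thresholds, and cost weights below are all invented for the example. It only illustrates the shape of the decision a Pilot-style computer makes, scoring candidate landing cells from a hazard map and rejecting any that violate safety limits.

```python
# Illustrative hazard-map scoring (invented thresholds and weights, not
# Luna-27 flight logic): filter out unsafe cells, then pick the cheapest
# viable one, trading off slope, boulder coverage, and divert distance.

def safest_cell(cells, max_slope_deg=12.0, max_boulder_frac=0.05):
    """cells: list of dicts with slope_deg, boulder_frac, dist_m keys."""
    viable = [c for c in cells
              if c["slope_deg"] <= max_slope_deg
              and c["boulder_frac"] <= max_boulder_frac]
    if not viable:
        return None  # no safe cell in view: hand off to abort/retarget logic

    def cost(c):
        # Lower cost = flatter, clearer, and closer to the nominal target.
        return (c["slope_deg"] / max_slope_deg
                + c["boulder_frac"] / max_boulder_frac
                + 0.2 * c["dist_m"] / 100.0)

    return min(viable, key=cost)

cells = [
    {"id": "A", "slope_deg": 3.0,  "boulder_frac": 0.01, "dist_m": 40.0},
    {"id": "B", "slope_deg": 15.0, "boulder_frac": 0.00, "dist_m": 5.0},   # too steep
    {"id": "C", "slope_deg": 5.0,  "boulder_frac": 0.02, "dist_m": 80.0},
]
print(safest_cell(cells)["id"])  # "A": flat, clear, and reasonably close
```

What makes the real problem hard is not this arithmetic but everything feeding it: building a trustworthy hazard map from lidar and camera data in the final two minutes of descent.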

Artemis and Russian missions in one glance

Mission layer | Artemis | Russian missions
Surface mobility and landing | VIPER used AI for landing-site risk assessment, route planning, operator support, and map-building; NASA’s Lunar Terrain Vehicle is being developed with autonomous driving. | Luna-27’s Pilot computer is designed to analyze terrain shortly before touchdown and use lidar-based sensing to avoid hazards and choose a better landing approach.
Navigation | LunaNet is designed to support autonomous navigation at the Moon through onboard orbit determination, guidance, and positioning. | Russian lunar landing work described through ESA also depends on onboard terrain evaluation rather than continuous manual intervention from Earth.
Orbital operations | Gateway is designed to run uncrewed for long periods, with autonomous systems and remote operations supporting science and station functions between crew visits. | Russia has said its future orbital station will use AI, and Roscosmos has outlined AI assistance for crew work on the ISS.

The table looks compact, but the pattern behind it is large. Both programs are moving away from the old model of “spacecraft as obedient hardware” and toward “space systems as supervised decision-makers.” The degree of autonomy differs, the engineering cultures differ, and the public messaging differs even more. The direction, though, is unmistakably shared.

Russia’s public AI record is thinner but more revealing

There is an important asymmetry here. NASA publishes far more technical detail about how AI and autonomy fit into Artemis. Russia’s public record is patchier and often less specific. That makes it tempting to overstate or understate what is happening. The honest reading sits in the middle. Russia is clearly pursuing AI in space operations, but the public documentation is more fragmented, more aspirational, and less richly detailed than the Artemis record.

Still, the signals are real. Reuters reported in June 2025 that Roscosmos planned to integrate Sber’s GigaChat model into ISS IT systems to help cosmonauts process satellite imagery, with the stated goal of improving effective image resolution and giving direct assistance to the crew. TASS has also reported that Russia’s planned orbital station is meant to use AI technologies, and Roscosmos leadership has framed that station as one that should operate autonomously to a significant extent, with robotics built into the concept. That is a very different use case from lunar hazard avoidance, yet it reveals the same logic. Russian space planning is also trying to move routine perception, filtering, and support work closer to the machine.

Russia’s space-junk monitoring plans point the same way. TASS reported that Russia intended to use AI elements in an automated warning system for hazardous situations in near-Earth space, with the goal of processing far more measurements and improving conjunction prediction. Even though that sits outside the Moon race, it belongs to the same technological family: automated classification, anomaly detection, and faster response in operationally dense environments.
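The screening step at the heart of any conjunction-warning pipeline is easy to state, even though operational systems wrap it in full orbit propagation and uncertainty analysis. The sketch below uses a straight-line relative-motion approximation (a common short-arc simplification) and invented numbers; it only shows what "conjunction prediction" computes: the time and distance of closest approach between two tracked objects, flagged against a threshold.

```python
# Hedged sketch of conjunction screening under a linear relative-motion
# approximation. Real systems propagate full orbits with covariance; the
# positions, velocities, and 1 km threshold here are invented.

def closest_approach(p1, v1, p2, v2):
    """Positions/velocities as (x, y, z) tuples; returns (t_ca, miss_dist)."""
    dp = [a - b for a, b in zip(p1, p2)]          # relative position
    dv = [a - b for a, b in zip(v1, v2)]          # relative velocity
    vv = sum(v * v for v in dv)
    # Minimize |dp + t*dv|; clamp to t >= 0 (future encounters only).
    t = 0.0 if vv == 0 else max(0.0, -sum(p * v for p, v in zip(dp, dv)) / vv)
    d = [p + v * t for p, v in zip(dp, dv)]
    return t, sum(x * x for x in d) ** 0.5

# Two objects closing head-on with a small lateral offset (km, km/s).
t, miss = closest_approach((0, 0, 0), (7.5, 0, 0), (1500, 0.4, 0), (-7.5, 0, 0))
print(t, miss)          # closest approach at t = 100 s, miss distance 0.4 km
if miss < 1.0:
    print("alert")      # under the screening threshold: raise a warning
```

The AI elements TASS describes sit around this core, classifying objects, filtering measurement noise, and prioritizing which of the many candidate conjunctions deserve analyst attention.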

The hardest part is trust in onboard judgment

Luna-25 supplies the harshest lesson in this entire discussion. Reuters reported that Roscosmos blamed the mission’s August 2023 failure on a malfunction in an onboard control unit that did not shut off the propulsion system when it should have. That does not show that “AI failed.” Public reporting does not support that claim, and it would be sloppy to make it. What it does show is more fundamental: once crucial decisions move into onboard control and automation, software reliability stops being a support issue and becomes the mission itself.

That is the hidden tension inside both Artemis and Russian plans. Everybody wants more autonomy because autonomy buys time, safety, reach, and endurance. Nobody gets that for free. Every extra layer of machine judgment creates a second demand just as severe: verification, fault tolerance, explainability, graceful degradation, and a clean handoff back to humans when the unexpected happens. NASA’s own Artemis autonomy work reflects that caution. It treats autonomy not as a single trick but as a system-wide capability that must analyze, reason, make decisions, and respond within a broader architecture.

That is also why the conversation around AI in space is often less glamorous than the headlines suggest. The crucial advances are not usually cinematic. They are software managers that keep a station stable while it is empty, navigation systems that keep a lander away from a boulder field, science tools that reduce hours of human path planning, and robotic caretakers that inspect, inventory, and respond to a leak before a crew ever arrives. NASA’s ISAAC work for Gateway-like operations makes that plain. It was built for autonomous caretaking, robotic inspection, inventory tracking, and response to problems such as leaks during uncrewed mission phases.
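A caretaking monitor of the kind ISAAC points toward can be illustrated with a deliberately simple sketch: watch a pressure telemetry stream and flag a sustained downward trend that could indicate a leak. The class name, window size, and thresholds are all invented for the example; a flight system would fuse multiple sensors and model nominal dynamics rather than compare raw samples.

```python
# Illustrative leak monitor for an ISAAC-style caretaker (hypothetical
# class and thresholds, not NASA code): flag a sustained pressure decay
# across a sliding window of telemetry samples.
from collections import deque

class LeakMonitor:
    def __init__(self, window=10, max_drop_rate=0.02):
        self.readings = deque(maxlen=window)
        self.max_drop_rate = max_drop_rate    # allowed kPa lost per sample

    def update(self, pressure_kpa):
        self.readings.append(pressure_kpa)
        if len(self.readings) < self.readings.maxlen:
            return "ok"                        # still building a baseline
        drop = self.readings[0] - self.readings[-1]
        rate = drop / (len(self.readings) - 1)
        return "leak-alert" if rate > self.max_drop_rate else "ok"

monitor = LeakMonitor()
for p in [101.3] * 10:                         # nominal, steady pressure
    status = monitor.update(p)
print(status)                                  # "ok"
for i in range(10):                            # slow, sustained decay
    status = monitor.update(101.3 - 0.05 * i)
print(status)                                  # "leak-alert"
```

The uncinematic point stands: during an uncrewed phase, this kind of quiet, continuous judgment is the mission, because nobody is aboard to notice the gauge moving.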

A lunar future that depends on software

Seen side by side, Artemis and Russia’s missions tell a bigger story about the next space age. The contest is no longer only about launch vehicles, flags, or even first footsteps. It is about which exploration systems can keep operating intelligently when people are absent, delayed, overloaded, or too far away to matter in the moment.

Artemis looks more mature on this front because NASA has turned AI and autonomy into a visible part of its lunar infrastructure, from navigation to rovers to Gateway operations. Russia’s case looks rougher and less transparent, yet the direction is similar: lunar landing autonomy, AI-supported crew tools, AI-enabled orbital-station ambitions, and automated orbital safety systems. The gap is not between “AI user” and “non-user.” The gap is between a program that has articulated the software layer in detail and a program that has signaled the same destination with less public clarity.

The most interesting thing Artemis and Russian missions share in their turn to AI, then, is not fashion. It is necessity. Once exploration moves from brief heroic visits to sustained operations on and around the Moon, the machine has to become more than hardware. It has to become a competent partner in perception, judgment, and survival. That is the quiet layer beneath the headlines, and it will matter long after the launch footage fades.

Beyond Artemis, the same AI logic is reshaping every major space program

That quiet AI layer does not end with Artemis, and it does not stop at Russia’s more uneven attempts to modernize its space operations. Once the frame widens, the same pressure appears almost everywhere that serious lunar and deep-space ambitions are taking shape. China, India, Japan, and Europe are all moving toward missions that can sense more, judge more, and recover more without waiting for constant instructions from Earth. What looked at first like a feature of one American program and one Russian response is better understood as a broader shift in spaceflight itself.

That shift is not mainly rhetorical. It sits inside landing software, terrain recognition, obstacle avoidance, onboard navigation, autonomous caretaking, and the growing expectation that spacecraft will remain useful even when crews are absent or communication is imperfect. The countries that matter in the next phase of lunar exploration are converging on the same operational truth: the farther missions go, the more intelligence has to move onboard. NASA says AI allows spacecraft to make decisions and keep working even when they are out of contact with Earth, and that principle now reaches far beyond Artemis alone.

China is building autonomy into lunar mission design

China is the clearest place to extend the argument, because its recent lunar missions show autonomy not as a slogan but as mission logic. During the Chang’e-6 landing on the far side of the Moon, an autonomous visual obstacle avoidance system was used to detect hazards, after which the spacecraft hovered and relied on a laser 3D scanner to select the final landing site before descending. That sequence matters. It shows software moving directly into the most unforgiving minutes of the mission, where hesitation and human delay are least tolerable.

The mission’s importance did not end with landing. Chang’e-6 became the first mission to return samples from the far side of the Moon, which is already enough to make it historic. But the deeper significance here is structural. A far-side mission depends on a communications architecture and control philosophy that accepts more onboard judgment from the start. China’s lunar program is steadily building confidence in machines that can operate through the awkward parts of exploration rather than merely survive them.

That philosophy also appears in the International Lunar Research Station concept pursued by China and Russia. CNSA described the ILRS as a scientific base with the capability of long-term autonomous operation on the lunar surface or in lunar orbit. This is where the continuity with the earlier Artemis discussion becomes especially strong. The larger powers are not simply racing to plant hardware near the Moon. They are designing systems meant to remain productive when humans are absent, delayed, or overwhelmed.

India turned careful onboard judgment into credibility

India fits naturally into the same storyline, though with a different tone. Chandrayaan-3 was not sold as a theatrical AI showcase. It was built to prove that India could land safely, operate credibly, and do so with a guidance stack robust enough for a region where mistakes are costly. ISRO’s own mission details list the ingredients with unusual clarity: laser and RF altimeters, a laser Doppler velocimeter, a lander position detection camera, a lander horizontal velocity camera, and a hazard detection and avoidance camera with its processing algorithm. That is the language of a program that treats software judgment as part of the vehicle itself.
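One reason that sensor list matters is redundancy: the laser and RF altimeters measure the same altitude independently, and the guidance stack has to merge them. A standard way to combine redundant measurements is inverse-variance weighting, sketched below. This is a textbook technique, not ISRO's published fusion algorithm, and the readings and variances are invented.

```python
# Minimal sensor-fusion sketch: combine two independent, noisy readings of
# the same altitude by inverse-variance weighting. Values are invented;
# this illustrates the principle, not Chandrayaan-3's actual filter.

def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted estimate of a quantity measured twice."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)       # fused variance beats either sensor alone
    return estimate, variance

laser_alt, laser_var = 120.4, 0.25   # m; precise near the surface
rf_alt, rf_var = 121.6, 1.00         # m; noisier but independent
alt, var = fuse(laser_alt, laser_var, rf_alt, rf_var)
print(round(alt, 2), round(var, 2))  # 120.64 0.2: pulled toward the laser
```

The fused variance (0.2) is lower than either sensor's alone (0.25 and 1.0), which is exactly why a lander carries overlapping instruments: the combination is more trustworthy than any single source during descent.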

When Chandrayaan-3 landed on August 23, 2023, India became the fourth country to achieve a soft lunar landing and the first to land near the Moon’s south polar region. That success carried symbolic weight, but it also changed India’s technical standing. A space agency earns real authority in this field when it proves it can manage descent, terrain uncertainty, and sensor fusion under lunar conditions, not when it merely announces ambition.

India’s case also helps clean up the language around AI in space. In public debate, the term often gets stretched until it means almost anything with software inside it. Chandrayaan-3 points to a more useful definition. The meaningful question is whether a mission can detect risk, estimate motion, choose a viable path, and complete critical phases with bounded independence from ground control. By that standard, India belongs squarely in the same global turn toward autonomy.

Japan and Europe are pushing precision rather than volume

Japan’s SLIM mission sharpens the story from another angle. JAXA used image-based navigation and autonomous guidance control during descent, then relied on image-based obstacle detection near the surface to avoid hazardous rocks and choose a safe landing zone. JAXA later reported positional accuracy on the order of only a few meters during key obstacle-detection phases and described the safe landing zone as one set autonomously by SLIM based on that detection. That is a decisive change in what lunar landing can mean. It is no longer just about reaching the Moon intact. It is about reaching the exact kind of place that used to be considered too risky.

Europe’s Hera mission is even more revealing because it carries the argument beyond the Moon. ESA says Hera will use autonomous navigation around the Didymos-Dimorphos asteroid system, and in March 2025 the agency reported that the spacecraft had autonomously locked onto dozens of impact craters and surface features during its Mars flyby as a full-scale test of its self-driving technique. That is not a decorative demo. It is a preview of the kind of spacecraft the next decade will increasingly require: vehicles able to orient, interpret, and maneuver around unfamiliar bodies with far less human micromanagement.
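The core of "locking onto surface features" can be illustrated with a toy version of the matching step: associate feature detections between two frames by nearest neighbor and average their displacement to estimate apparent drift. The coordinates are invented pixel positions, and real systems like Hera's use descriptor matching and robust estimation rather than this bare-bones sketch.

```python
# Illustrative feature-tracking step (invented data, not ESA's algorithm):
# match crater detections between two frames by nearest neighbor, reject
# far-away matches as outliers, and average the shifts to estimate drift.

def estimate_drift(frame_a, frame_b, max_match_px=20.0):
    """Match each feature in frame_a to its nearest neighbor in frame_b."""
    shifts = []
    for (xa, ya) in frame_a:
        nearest = min(frame_b, key=lambda p: (p[0] - xa) ** 2 + (p[1] - ya) ** 2)
        dx, dy = nearest[0] - xa, nearest[1] - ya
        if (dx * dx + dy * dy) ** 0.5 <= max_match_px:   # reject bad matches
            shifts.append((dx, dy))
    n = len(shifts)
    return (sum(s[0] for s in shifts) / n, sum(s[1] for s in shifts) / n)

craters_t0 = [(100.0, 200.0), (340.0, 120.0), (220.0, 400.0)]
craters_t1 = [(103.0, 198.0), (343.0, 118.0), (223.0, 398.0)]  # shifted scene
print(estimate_drift(craters_t0, craters_t1))  # (3.0, -2.0): uniform image drift
```

Fed back into guidance, an estimate like this is what lets a spacecraft hold its orientation relative to an unfamiliar body without a human in the loop.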

Japan and Europe do not need to mirror Artemis in scale to shape the field. A pinpoint landing architecture or a reliable autonomous navigation stack can influence everyone else’s design standards. Space power is starting to include something subtler than launch capacity alone. It includes the ability to make machines behave intelligently at the edge of contact.

The pattern looks different in Russia, but it is still the same pattern

Russia still belongs in this wider picture, though more cautiously than China or India. Publicly, the strongest recent AI signal has appeared in orbital operations rather than in a clearly articulated lunar autonomy program. Reuters reported in June 2025 that Roscosmos planned to integrate Sber’s GigaChat model into ISS IT systems to help cosmonauts process satellite imagery, including improving the effective image resolution available to the crew. That is a different use case from landing autonomy, but it belongs to the same family of problems. Routine interpretation, prioritization, and support work is being pushed closer to the machine.

At the same time, Russia’s lunar record reinforces the risk side of the autonomy story. Reuters reported that Roscosmos attributed Luna-25’s 2023 failure to a malfunction in an onboard control unit. That does not justify the lazy claim that “AI caused the crash.” The public record does not support that. What it does show is more fundamental. The moment more authority moves into onboard systems, software reliability and fault handling stop being supporting details and become mission survival.

That tension runs through the entire global picture. Every major space power wants more autonomy because autonomy buys time, range, resilience, and efficiency. None of them gets that gain for free. More onboard judgment demands stricter verification, better sensor fusion, cleaner recovery logic, and much higher trust in the machine’s behavior when conditions turn ugly. The global move toward AI in space is not a triumphal story about smarter software. It is a harder story about which agencies can trust their software enough to let it matter.

Where the global shift is happening fastest

Mission area | What autonomy is actually doing
Descent and landing | Detecting hazards, matching terrain, estimating position and velocity, selecting safer touchdown zones, and preserving control when Earth cannot intervene quickly
Uncrewed operations and deep-space navigation | Maintaining stations between crew visits, managing onboard tasks, tracking surface features, and navigating around distant targets with limited real-time human supervision

This is the practical overlap between Artemis, Chang’e-6, Chandrayaan-3, SLIM, Hera, and Russia’s newer orbital AI plans. The labels vary from agency to agency, but the burden placed on software is becoming remarkably similar.

The next space order will be written partly in software

Seen this way, the earlier Artemis-centered argument was only the opening section of a much larger story. The most consequential race in space is no longer only about who can launch more mass or announce the boldest roadmap. It is about who can build missions that remain composed, useful, and safe when people are too far away to help in the moment. Artemis shows that in lunar infrastructure. China shows it in far-side mission execution. India shows it in disciplined landing systems. Japan shows it in precision. Europe shows it in self-driving deep-space navigation. Russia, even through a more uneven public record, shows that the same pressure is reaching orbital operations and future station design.

That is where the whole line of argument finally settles. AI in space is not the headline because it sounds futuristic. It is becoming central because sustained exploration now depends on machines that can perceive, interpret, and act with discipline when nobody on Earth can close the loop fast enough. The quiet layer beneath Artemis turns out not to be uniquely American, and not uniquely Russian either. It is becoming the hidden operating system of modern space power.

Artificial intelligence in the service of space missions

Seen from that wider angle, artificial intelligence is no longer a futuristic add-on to spaceflight. It is becoming one of the working conditions of exploration itself. Not because agencies want a fashionable label, but because modern missions increasingly depend on systems that can interpret terrain, filter noise, prioritize decisions, protect hardware, and preserve mission value when human response is too slow or too far away. The deeper spaceflight moves into sustained lunar operations, autonomous navigation, robotic surface work, and long uncrewed phases, the less credible the old model of constant human supervision begins to look.

That is where the phrase artificial intelligence in the service of space missions becomes more precise than it first appears. The role of AI is not to replace astronauts, flight controllers, or mission designers. It is to extend their reach into environments where presence is limited, time is expensive, and hesitation can destroy years of work in seconds. A good autonomous system does not make exploration less human. It makes human ambition more durable. It gives crews better decisions, gives spacecraft better resilience, and gives missions a better chance of surviving reality rather than merely following a plan.

The agencies that will shape the next era of space exploration may therefore be judged by something quieter than launch spectacle. They will be judged by the reliability of their software, the discipline of their autonomy, and the trustworthiness of the machine judgment they are willing to fly. In that sense, the future of space power will not be written only in rockets, budgets, and flags. It will also be written in guidance logic, perception systems, onboard reasoning, and the invisible architecture that allows a mission to remain calm when the environment is not.

And that may be the clearest way to understand the decade ahead. The most valuable intelligence in space will not be the loudest or the most theatrical. It will be the kind that keeps a lander steady, a rover useful, a station alive, and a mission intact when nobody on Earth can intervene in time. That is the real software frontier now taking shape above the atmosphere — and it is becoming one of the defining forces behind the next generation of space missions.

Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency


This article is an original analysis supported by the sources cited below

Artemis II Launch
NASA mission page confirming Artemis II lifted off on April 1, 2026.
https://www.nasa.gov/gallery/artemis-ii-launch/

Artificial Intelligence
NASA overview of agency-wide AI use in missions, exploration, planning, and autonomous systems.
https://www.nasa.gov/artificial-intelligence/

Part 1 Artificial Intelligence and NASA’s First Robotic Lunar Rover
NASA article on VIPER’s use of AI for risk assessment and decision support.
https://www.nasa.gov/blogs/missions/2023/12/01/part-1-artificial-intelligence-and-nasas-first-robotic-lunar-rover/

Part 2 Artificial Intelligence and NASA’s First Robotic Lunar Rover
NASA article on VIPER’s AI-assisted path planning, operator support, and mapping.
https://www.nasa.gov/blogs/missions/2023/12/14/part-2-artificial-intelligence-and-nasas-first-robotic-lunar-rover/

LunaNet Empowering Artemis with Communications and Navigation Interoperability
NASA explanation of LunaNet and autonomous navigation services for lunar missions.
https://www.nasa.gov/centers-and-facilities/goddard/lunanet-empowering-artemis-with-communications-and-navigation-interoperability/

Lunar Terrain Vehicle
NASA page describing the Artemis lunar rover concept, including autonomous driving.
https://www.nasa.gov/suits-and-rovers/lunar-terrain-vehicle/

Gateway Capabilities
NASA overview of Gateway’s uncrewed operation, autonomous systems, and remote operations.
https://www.nasa.gov/gateway-capabilities/

Gateway A Deep Space Home and So Much More
NASA article on HALO and the software architecture enabling autonomous station operations.
https://www.nasa.gov/missions/artemis/gateway-halo-a-deep-space-home/

Integrated System for Autonomous and Adaptive Caretaking
NASA description of ISAAC, a project for autonomous spacecraft caretaking during uncrewed phases.
https://www.nasa.gov/integrated-system-for-autonomous-and-adaptive-caretaking-isaac/

Amendment 37 New Opportunity C.12 Foundational Artificial Intelligence for the Moon and Mars
NASA Science notice on foundation-model research for Moon and Mars science and exploration applications.
https://science.nasa.gov/researchers/solicitations/roses-2025/amendment-37-new-opportunity-c-12-foundational-artificial-intelligence-for-the-moon-and-mars/

Luna
ESA page describing Luna-27 hazard avoidance, lidar sensing, and terrain analysis before touchdown.
https://www.esa.int/Science_Exploration/Human_and_Robotic_Exploration/Exploration/Luna

Russia pinpoints cause of moon shot failure, looks to bring next missions forward
Reuters report on Roscosmos attributing Luna-25’s failure to an onboard control unit malfunction.
https://www.reuters.com/world/europe/russia-says-moon-shot-failed-due-control-unit-malfunction-2023-10-03/

Russia plans to integrate homegrown AI model into space station
Reuters report on Roscosmos plans to use GigaChat on the ISS for crew support and image processing.
https://www.reuters.com/business/finance/russia-plans-integrate-homegrown-ai-model-into-space-station-2025-06-03/

AI to help create new Russian Orbital Station — chief designer
TASS report on AI technologies in plans for the future Russian orbital station.
https://tass.com/science/1811131

Russia’s future orbital station to use artificial intelligence — Roscosmos chief
TASS report on Roscosmos describing an autonomous, AI-equipped future orbital station.
https://tass.com/science/1333311

Russia to launch first satellite to monitor space junk in 2027
TASS report on AI elements in Russia’s planned hazardous-space-situation warning system.
https://tass.com/science/1161437

Liftoff! NASA Launches Astronauts on Historic Artemis Moon Mission
NASA release confirming the Artemis II mission launched on April 1, 2026.
https://www.nasa.gov/news-release/liftoff-nasa-launches-astronauts-on-historic-artemis-moon-mission/

Gateway Frequently Asked Questions
NASA FAQ describing Gateway science operations enabled by autonomous systems and remote operations.
https://www.nasa.gov/gateway-frequently-asked-questions/

Xinhua Headlines: China’s Chang’e-6 lands on moon’s far side to collect samples
Report describing autonomous visual obstacle avoidance and laser-based landing-site selection during Chang’e-6 descent.
https://english.news.cn/20240602/65a540dde3f042b98be10a12f8d18b7c/c.html

China’s Chang’e-6 brings back first samples from moon’s far side to Earth
Report on the historic return of the first far-side lunar samples.
https://english.news.cn/20240625/b32da60752504a5184d5a8b5b467dbce/c.html

China’s spacecraft takes off from moon with first samples from far side
Report on Chang’e-6 ascent and far-side sample return operations.
https://english.news.cn/20240604/1c1c4096916c4cbab93d5ff891bb622e/c.html

China and Russia sign a Memorandum of Understanding Regarding Cooperation for the Construction of the International Lunar Research Station
CNSA statement describing the ILRS as a scientific base capable of long-term autonomous operation.
https://www.cnsa.gov.cn/english/n6465652/n6465653/c6811380/content.html

Chandrayaan-3 Details
ISRO mission page listing the lander’s hazard detection, navigation, velocity, and positioning systems.
https://www.isro.gov.in/Chandrayaan3_Details.html

Government of India Department of Space Parliament Session Document
Official Indian government document confirming Chandrayaan-3’s successful soft landing near the lunar south polar region.
https://www.isro.gov.in/media_isro/pdf/docs/UsefulLinks/ParliamentQuestions/Parliament_Monsoon_Session_26092024_eng.pdf

Outcome for the Smart Lander for Investigating Moon (SLIM)’s Moon Landing
JAXA release describing SLIM’s image-based navigation, autonomous obstacle detection, and pinpoint landing performance.
https://global.jaxa.jp/press/2024/01/20240125-1_e.html

Hera asteroid mission tested self-driving technique at Mars
ESA report on Hera’s autonomous tracking of surface features during its Mars flyby as a test of self-driving navigation.
https://www.esa.int/Space_Safety/Hera/Hera_asteroid_mission_tested_self-driving_technique_at_Mars

Hera
ESA mission overview describing Hera’s autonomous navigation around its asteroid targets.
https://www.esa.int/Space_Safety/Hera

Cover photo: NASA/John Kraus
Photo: Official crew portrait, clockwise from left: Koch, Glover, Hansen and Wiseman. By Josh Valcarcel, flickr.com, Public Domain.