Elon Musk Interview at West Point

Feb 12, 2025 | Politics

Key Points from Elon Musk’s West Point Interview:
2/10/2025

Future Warfare Trends:

  1. AI and drones will dominate future conflicts
  • Current Ukraine war is primarily drone-based
  • Production rate and volume will be crucial
  • Kill ratio × number of drones determines outcomes (see the sketch after this list)
  • Human presence at front lines will become too dangerous
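
As a rough illustration of the kill-ratio point above, here is a minimal sketch in Python. Every number in it is a hypothetical assumption chosen for illustration, not a figure from the interview; it simply shows how a side with a worse kill ratio can still come out ahead on expected attrition if its production volume is high enough.

```python
# Minimal sketch of the "kill ratio x number of drones" claim above.
# All numbers are hypothetical assumptions for illustration, not figures from the interview.

def effective_kills(kill_ratio: float, drones_fielded: int) -> float:
    """Expected enemy losses = kill ratio multiplied by the number of drones fielded."""
    return kill_ratio * drones_fielded

# Side A: more capable drones (2 kills each) but low production volume.
side_a = effective_kills(kill_ratio=2.0, drones_fielded=1_000)    # 2,000 expected kills

# Side B: less capable drones (0.5 kills each) but 10x the production volume.
side_b = effective_kills(kill_ratio=0.5, drones_fielded=10_000)   # 5,000 expected kills

print(f"Side A expected kills: {side_a:,.0f}")
print(f"Side B expected kills: {side_b:,.0f}")
# Despite the worse kill ratio, the higher-volume side comes out ahead on expected
# attrition, which is why production rate matters as much as per-drone performance.
```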

Technology Challenges:

  1. Communications and Control
  • Space-based communications (Starlink) critical
  • GPS vulnerability to jamming
  • Need for autonomous/localized AI in drones
  • Bandwidth limitations in human-machine interface
  2. AI Development
  • Existential risk concerns
  • Need for “truth-seeking and curious” AI
  • Challenge of maintaining human control
  • Neuralink development to improve the human-machine interface

Military Innovation:

  1. Production and Implementation
  • US has strong technology but low production rates
  • Need to shift from “fighting the last war” mentality
  • Industrial base must scale up drone production
  • Testing and trust-building essential for new technology adoption

Leadership Requirements:

  1. Technical Competence
  • Leaders must understand their technical domain
  • Similar to “cavalry captain must ride horses”
  • Cross-disciplinary knowledge valuable
  2. Innovation Process
  • Make requirements less complex
  • Delete unnecessary steps
  • Optimize what remains
  • Increase speed
  • Automate only after other steps

Critical Attribute for Future Officers:

  • Curiosity identified as most important trait
  • Emphasis on broad reading and critical thinking

Space Domain:

  • Critical for communications
  • Potential for kinetic weapons
  • Increasingly militarized
  • Essential for positioning and navigation

 

Commentary/Discussion with John
2/11/2025

John: If you haven’t yet happened upon this 30-minute video, I can highly recommend it for some very thought-provoking dialogue. I don’t know the name of the military interviewer, but he’s articulate and asks Elon excellent questions.

Elon has interesting answers. But I was very surprised that he didn’t say that AI’s ultimately superior intelligence over humans will enable it to talk down any conflict and find a negotiated peace before it ever comes to battle, eventually doing so even before the humans begin to realize that a conflict is forming.

This would ultimately render all militaries useless for war, other than an entirely computerized drone military that exists only for show, with AI knowing full well that it will never be used. Surely, if AI becomes vastly more intelligent than all of humanity, it will detect and resolve situations before they lead to violence. Ultimately, this seems to me the answer to violence at every scale: a furious person in a nuclear family talked down from domestic violence by their intelligent (and strong) domestic robot; gangs in a city talked down from gang warfare by a police/community welfare robot; leaders of nations talked down from international conflicts by world-class AI superclusters before conflicts begin to form, and eventually before the humans themselves even realize that a conflict is brewing. This strikes me as the ultimate future scenario for AI: to outsmart humans and intervene when violence is a potential outcome, while helping humans achieve their full flowering. How’s that for an optimistic outlook?

Thomas: 

  • Your proposal that AI be used as the consummately skilled hostage negotiator/international diplomat/mediator, etc., has merit. An AI that develops peaceful, problem-solving, negotiated solutions would be the ultimate peacemaker.
  • Successfully deploying this concept depends upon the AI negotiator’s expansive worldview, ideally approaching the totality of God’s mind.
  • An AI capable of such skilled conflict resolution would need an accurate mastery of moral weighting (the Judeo-Christian/Biblical ethic). It would require detailed information about the context of the situation and an accurate account of its facts (sequence, actions, words). It would also need a strong understanding of the human souls involved (motivation, intent, moral boundaries and criteria, weighting of values).
  • Given that such a detailed and comprehensive view of the external and internal world is unlikely to become a reality soon, or maybe ever, the question is how to implement this idea with more approximate versions of such omniscient knowledge.
  • AI should be the ultimate interrogator/interviewer/empath in understanding the worldview of all the stakeholders/parties/participants in any situation regarding conflict over property or procedure.
  • Elon has named curiosity and truth-seeking the preeminent guiding stars of AI. This is probably a reasonable prime directive, as it implies that the moral code also asks what the ultimate morally correct solution is in any given situation.
  • As such, AI training should include all of the moral teachings of all religions and observe the impact of every moral/ethical/religious code on people’s short- and long-term happiness.
  • Until then, we should work to remove the guardrails of moral judgment in AI systems and let the consideration of long-term happiness outcomes be an important weighting/guiding factor in determining the solutions recommended by AI negotiation counseling.

John: AI needs only to be smarter than human beings, not fully versed in the entire truth of the universe.  I have no doubt that will happen, and probably within 5 years.

Thomas: If AI is smarter than humans, it may be able to come up with solutions that are better and more acceptable to both parties than humans would have thought of. Such compromises would be sufficient to de-escalate and would lead to at least a temporary solution, provided people are rational. The problem is when one or both parties are irrational, i.e., they disregard all proposed solutions in favor of an ideal linked to the annihilation or subjugation of the other party. The solutions AI could propose to de-escalate would not satisfy the party wanting to dominate, and compromise with such an adversary produces eventual subjugation by increment. I think the AI would recognize that and not recommend compromise or negotiation with such an opponent. Given the world-domination commitment of one side, and provided the AI is truth-seeking (not politically correct or hobbled in its reasoning), I think it would suggest that the peaceful, non-aggressor nation engage in a campaign of changing the hearts and minds of the aggressor while maintaining a strong defense, so as not to invite attack by presenting an easy target. The purpose of defensive armaments (massive drone capability) in such a situation is to deter aggression by nations/religions/ideologies that are committed to the domination or subjugation of all others to their ideology or rule.

 
