1. Capabilities of general-purpose AI
    1. Generative capability (text, images, audio, video, etc.)
    2. Reasoning capability and other enhancements
    3. Hallucinations are still a problem
      1. When it doesn't know something, it makes something up
    4. Continuous improvement
      1. Better results from GPT-4 than GPT-3.5
      2. More than 200 plugins by June 2023
      3. Code Interpreter
        1. Doing math
        2. Data analysis
        3. Visualization
        4. Interactive graphing
        5. Image editing
        6. Image analysis
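To make the "data analysis" item concrete, here is a minimal sketch of the kind of task Code Interpreter automates from a plain-language request; the sales figures and the `summary` structure are invented for illustration, not from the talk.

```python
# Sketch of a data-analysis task of the kind ChatGPT's Code Interpreter
# automates: summary statistics over a small (invented) dataset.
from statistics import mean, median

# Hypothetical monthly sales figures (illustrative only)
sales = [120, 135, 128, 150, 142, 160]

summary = {
    "mean": round(mean(sales), 1),
    "median": median(sales),
    "min": min(sales),
    "max": max(sales),
    # percent growth from first to last month
    "growth_pct": round((sales[-1] - sales[0]) / sales[0] * 100, 1),
}
print(summary)
```

The point is not the code itself but that a user can get this analysis, plus a chart, by asking in natural language.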
      4. MemoryGPT
        1. expands input capacity
        2. expands long-term memory
      5. AutoGPT
        1. Automates the steps needed to complete complex tasks
        2. The user sets a goal; the AI does the rest
      6. Learning agents
    5. Hype
      1. Understanding the situation is critical
        1. About general purpose AI integration
        2. Must understand capabilities and dangers
      2. Help or hurt
        1. help companies
        2. Warnings about dangers drive public pressure for protective measures
        3. Regulation and policy tend to react rather than proact
      3. Why fear/danger lens?
        1. great risk at play
        2. our tech has exceeded the limits
        3. AI has turned on the afterburners
        4. We need swift action
  2. How LLMs work
    1. Procedural Algorithms vs. neural networks
      1. Procedural
        1. Transparent
        2. Decision tree
      2. Neural Network
        1. Black-Box
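The transparency gap above can be sketched in a few lines. Both functions, their names, and the weights below are hypothetical illustrations of the contrast, not anything from the talk: the procedural version can be read line by line, while the network's behavior lives in opaque learned numbers.

```python
import math

# Procedural logic is transparent: every decision can be read off the code.
def approve_loan(income, debt):
    """A hand-written decision rule - each branch is inspectable."""
    if income > 50_000 and debt / income < 0.4:
        return True
    return False

# A neural network is a black box: the same kind of decision emerges from
# learned weights with no human-readable branches.
# (Toy 1-neuron "net" with made-up weights, for contrast only.)
weights = [0.00003, -0.8]   # learned, not authored - their meaning is opaque
bias = -1.2

def approve_loan_nn(income, debt_ratio):
    z = weights[0] * income + weights[1] * debt_ratio + bias
    return 1 / (1 + math.exp(-z)) > 0.5   # sigmoid, thresholded at 0.5
```

A real network has millions or billions of such weights, which is why its reasoning cannot be audited the way a decision tree can.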
    2. How LLMs build understanding
      1. Elements
        1. Layers
        2. Embeddings
        3. Clustering
        4. Position
        5. Attention
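The elements listed above can be sketched end to end for a tiny example. All the numbers here are made up for illustration, and real models learn separate query/key/value projections; this stripped-down sketch just shows how embeddings, position, and attention combine.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Tiny made-up 2-d embeddings for a 3-token sequence,
# plus positional information added to each token.
embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
positions  = [[0.1, 0.0], [0.0, 0.1], [0.1, 0.1]]
x = [[e + p for e, p in zip(emb, pos)]
     for emb, pos in zip(embeddings, positions)]

# Self-attention for one query token (here: the last one).
query = x[-1]
scores = [dot(query, k) / math.sqrt(len(query)) for k in x]  # scaled dot product
weights = softmax(scores)                                    # attention weights
context = [sum(w * v[i] for w, v in zip(weights, x))
           for i in range(len(query))]

print(weights)  # how much the last token "attends" to each token; sums to 1
```

Stacking many such attention steps in layers is how an LLM builds up contextual understanding of a whole sequence.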
      2. Why Reinforcement Learning from Human Feedback (RLHF) matters
        1. More nuanced responses
        2. Better alignment with human values
        3. Adaptation to new scenarios
        4. Error correction
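The loop behind those benefits can be sketched at a high level. This is a structural toy only: real RLHF trains a reward model on human preference pairs and then optimizes the policy with PPO-style updates, whereas the generator, preference function, and score bookkeeping below are invented stand-ins.

```python
import random

def rlhf_step(model_generate, human_prefers, prompt, update):
    """One simplified RLHF iteration over a pair of sampled responses."""
    a, b = model_generate(prompt), model_generate(prompt)  # sample two answers
    chosen, rejected = (a, b) if human_prefers(a, b) else (b, a)
    update(prompt, chosen, rejected)   # push the model toward 'chosen'
    return chosen

# Toy stand-ins (assumptions for illustration):
random.seed(0)
scores = {"polite": 0.0, "rude": 0.0}

def generate(prompt):
    return random.choice(list(scores))

def prefers(a, b):                 # the labeler prefers polite answers
    return (a == "polite") >= (b == "polite")

def update(prompt, chosen, rejected):
    scores[chosen] += 1.0          # crude stand-in for a policy update
    scores[rejected] -= 1.0

for _ in range(100):
    rlhf_step(generate, prefers, "hi", update)

print(scores)  # repeated preference feedback shifts the model toward "polite"
```

Even this toy version shows the mechanism behind the list above: human judgments, not hand-written rules, steer the model's behavior.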
  3. Where things are going wrong
    1. Current and new short-term harms
      1. Hallucinations
      2. Deepfakes
      3. Simulations
      4. Social hacking of AI systems
    2. Net good vs. net bad: the order matters
      1. incredible potential, but also great risk
        1. Cyber-offense
        2. Deception
        3. Persuasion & manipulation
        4. Political strategy
        5. Weapons acquisition
        6. Long-horizon planning
        7. AI development
        8. Situational awareness
        9. Self-proliferation
      2. Order matters: we can't break society along the way
      3. Our cultures are built on language, and we've handed out the keys to its manipulation
      4. Open source: why it's different this time
      5. Dangers of unregulated, decentralized AI
      6. AI demands novel solutions to bind power with responsibility
    3. Ways things can go horribly wrong in the future
      1. Multi-polar traps
        1. Situation in which everyone engages in harmful behavior not because they want to, but because they will lose otherwise
        2. escaping
          1. Collaboration and communication
          2. Shared norms and values
          3. Incentives for cooperation
          4. Monitoring and enforcement
          5. Transparent decision-making processes
          6. Adaptive governance
          7. Long-Term perspective
          8. External intervention
  4. AI & the economy
    1. Incredible acceleration of the system
      1. What ideas are being accelerated?
        1. Prevailing socioeconomic systems
          1. Growth is good
          2. one can own land
          3. nature is a stock of resources to be converted to human purposes
          4. people are perfectly rational economic actors
      2. The broken paradigm of today's extractive tech
        1. Give users what they want
        2. Technology is neutral
        3. We've always had moral panics
        4. Maximize personalization
        5. Who are we to choose?
        6. Grow at all costs
        7. Obsess over metrics
        8. Capture attention
      3. Existential Risk = (Competition × Extraction) ^ Technology
    2. Addressing misaligned financial incentives
      1. Price is always at the center
      2. Thought leadership (set unified agenda; create and disseminate strategic language)
        1. External pressure (drive a cultural awakening)
          1. Media
          2. Documentaries
          3. Books
          4. TV appearances
          5. Podcast
          6. Conferences
          7. Policy / Law
          8. Humane tech policy principles
          9. Advise global leaders & policymakers
          10. Litigation & shareholder actions
          11. Education
          12. Families & Schools
          13. Toolkits & resources
        2. Aspirational pressure (drive a shift toward humane technology)
          1. Product & Culture change
          2. Support and advice for tech companies
          3. Course (training on building humane technology)
          4. Workshops (general or topical)
          5. Mobilization
          6. Connect tech experts with social impact & policy
          7. Build an aligned community
          8. Community solutions library
          9. Toolkits & resources
    3. Aligning our institutions with our tech
      1. Democracies and collaborative problem-solving rely on two key faculties
        1. sensemaking: how we make sense of the world and reality
        2. Choicemaking: how we make wise choices
      2. There is a "wisdom gap", created by runaway technology, between:
        1. Complexity of the issues
          1. Misinformation
          2. Cyber-attacks
          3. Nuclear escalation
          4. GPT-4 & synthetic media
          5. Global financial risk
          6. Extremism
          7. AI arms race
          8. Planetary boundaries
          9. Synthetic biology
          10. ...
        2. Ability to make sense of the complexity
      3. Alternatives to GDP
        1. Genuine Progress Indicator, which folds in big externalities like:
          1. crime
          2. ozone depletion
          3. lost leisure time
          4. ...
        2. Bhutan's Gross National Happiness
        3. Thriving Places Index
        4. Happy Planet Index
        5. Human Development Index
        6. Green Domestic Product
        7. Better Life Index
      4. Key takeaways
        1. Existential Risk = (Competition × Extraction) ^ Technology
        2. A price-centered system needs interventions that connect tightly to price
        3. AI demands that we upgrade democratic functioning and institutions to keep up with our innovations
  5. Deepening disparity
    1. Leaving behind people with fewer resources and those who don't fit the common case
    2. Massive job losses likely (more for higher education levels) + much more wealth inequality
      1. Short-term: huge productivity gains for people using AI
      2. Soon after that: lots of layoffs
      3. More impact on cognitively intensive jobs (correlated to higher education levels)
      4. Big psychological/status hit for that group
      5. Less immediate impact on physically intensive jobs, but robots are increasingly capable
      6. Additional labor competition = lower wages for those who are left
      7. Cost of producing many goods & services likely to drop greatly
      8. "AI can partly help you with your job" will translate to lost jobs:
        1. "There's a huge cost premium on work that has to be split across two people - there's the communication overhead, there's the miscommunication, there's everything else, and if you can make one person twice as productive, you don't do as much as two people could do - maybe you do as much as three and a half or four people could do." - Sam Altman, OpenAI CEO
      9. Estimated exposure: 71% for workers with a bachelor's degree, 63% for a master's or higher
    3. AI reinforces past patterns and can over-tailor individual risk assessments
      1. Based on data
        1. Health coverage?
        2. Home loan?
        3. Disability insurance?
      2. ...
    4. AI strengthens societal "defaults" and stereotypes
      1. AI-Amplified Societal Conditioning Happens With:
        1. Gender
        2. Age
        3. Sexual Orientation
        4. Marital Status
        5. Race
        6. Parenting Roles
        7. Religion
        8. Criminal Record
        9. Economic Status
        10. Disability
        11. Nationality
        12. Mental Health Stigma
  6. Paths forward
    1. A catastrophic mix of conditions
      1. Frenetic Innovation
        1. ~10x/year is happening
        2. Intense competition, societal protections lagging
        3. Societal integration without understanding risk
      2. Synthetic Media
        1. Easy creation of stunning synthetic media
        2. Very hard to tell real vs. synthetic
        3. Social media promotes engaging synthetic content
      3. Distributed Access
        1. Global access complicates regulation
        2. Commodity hardware runs new models
        3. Countless options for malicious actors
      4. Rising inequality
        1. Massive job losses likely
        2. capitalism prioritizes those with capital
        3. Risk of disenfranchisement and civil unrest
    2. Transcendent innovation, requires transcendent clarity
      1. Why are we really doing all this innovation in the first place?
      2. Is our definition of "AI Alignment" sufficient?
      3. How well do you understand the conditions shaping you and your product?
    3. Thriving and Centering values modules at https://humantech.com/course
    4. Centering what's important: 10 core capabilities
      1. Nussbaum and Sen proposed a list of 10 core capabilities that societies should seek to foster a minimal threshold of, including:
        1. life, health and bodily integrity
        2. thinking, feeling and emotion
        3. affiliation
        4. play
        5. control over one's environment
    5. AI & Kids
      1. Relationships are the entry point
        1. Social media, phones, media, games
        2. Companies will supercharge their attempts to build these relationships with much more persuasive AI
        3. AI will magnify most of the known harms across all these domains and create huge $ opportunities
      2. Opportunities
        1. Mental health support
          1. More accessible, but
          2. Huge risk: AI chatbots on social media driven by perverse incentives
        2. Surgeon General Vivek Murthy
          1. Is social media safe?
          2. Evidence that social media harms young people's mental health
        3. Education
          1. More interactive education
          2. Broader access
      3. Resilience
        1. Dangerous for kids accustomed to getting only what they want
          1. AIs look great
          2. Sound great
          3. Support them
          4. ...
        2. instant gratification at an all-time high means resilience is at an all-time low
      4. ...
    6. Recapping solutions
      1. Within the existing socioeconomic system
        1. Interventions have to affect price
        2. Cultural awakening, legislation, litigation, insider pressure, inspiration, design support
        3. Global coordination needed
        4. Safety investments commensurate to growth
      2. Getting AI aligned with society inside the governing economic system
        1. Including heavy input from social sciences + wisdom traditions
        2. Always centering human rights / growing disparities
        3. Centering values in an explicit way
      3. Transitioning to new systems
    7. How you can help
      1. Learn more about AI
        1. Watch this talk again
        2. Take our course: Foundations of Humane Technology
        3. Check out this ML safety course: https://course.mlsafety.org/
        4. Check out All Tech Is Human's reading list
      2. Speak up
        1. demonstrate nuanced thinking and balance
        2. weigh in on forums and discussions
        3. speak up where you have agency