• 5 Posts
  • 266 Comments
Joined 2 years ago
Cake day: June 26th, 2023


  • uranibaba@lemmy.worldtomemes@lemmy.worldThe Future Is Now
    5 days ago

    Here is an AI summary in bullet point form: /s

    • Title: The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers
    • The study surveyed 319 knowledge workers to explore how Generative AI (GenAI) impacts their critical thinking practices and the perceived effort involved in these tasks.
    • Knowledge workers primarily engage in critical thinking when using GenAI tools to enhance work quality, avoid negative outcomes, and develop skills.
    • Key barriers to critical thinking include lack of awareness, time pressure, limited motivation, and challenges in improving AI responses in unfamiliar domains.
    • Higher confidence in GenAI reduces the perceived effort required for critical thinking tasks, while confidence in their own skills tends to increase perceived effort, especially during evaluation and application of AI outputs.
    • Participants reported enacting critical thinking in about 60% of the examples shared, often involving goal and query formation, response inspection, and integration of AI outputs.
    • Trust in GenAI can lead to over-reliance, diminishing independent problem-solving and critical engagement, particularly in routine tasks.
    • The study identified motivators for critical thinking, including the desire for quality work and skill improvement, alongside inhibitors like perceived task importance and job scope.
    • Knowledge workers often conflated reduced effort with reduced critical thinking when satisfied with AI-generated responses, indicating a potential risk of complacency.
    • The findings suggest that GenAI tools should be designed to support critical thinking by addressing awareness, motivation, and ability barriers among users.
    • The implications highlight the need for feedback mechanisms in GenAI tools to help users calibrate their trust and confidence, ensuring a balanced relationship between AI assistance and independent critical thinking.

  • I do not agree with the idea that humans are being trained to act like robots. Any company with a customer service department is likely tracking the root causes of their customers’ issues. With enough data, they can identify the most common problems and their solutions. If the goal is to resolve a customer’s issue as quickly as possible (which seems like a reasonable assumption), it makes sense to guide the customer through the most common solutions first, as that will likely solve the problem.

    If someone works in customer service and repeats the same script daily, it’s understandable that they may come across as robotic due to sheer boredom. A skilled customer service representative can recognize when to use the script and when to deviate. However, if a company fails to hire the right people and does not offer a fair salary, those best suited for the role are unlikely to take the job.



  • It appears this was a victim impact statement.

    A victim impact statement is a written or oral statement made as part of the judicial legal process, which allows crime victims the opportunity to speak during the sentencing of the convicted person or at subsequent parole hearings.

    From the article (emphasis mine):

    But the use of AI for a victim impact statement appears novel, according to Maura Grossman, a professor at the University of Waterloo who has studied the applications of AI in criminal and civil cases. She added that she did not see any major legal or ethical issues in Pelkey’s case.

    "Because this is in front of a judge, not a jury, and because the video wasn’t submitted as evidence per se, its impact is more limited," she told NPR via email.