Columbia Student Dismissed for Cheating with AI Creates Tool to Aid Academic Dishonesty Through AI

Cluely AI: The Controversial Rise of a Real-Time Cheating Tool in the Age of Artificial Intelligence

In the rapidly evolving landscape of artificial intelligence, new tools regularly surface that challenge conventional norms and reshape how we engage with technology. One such tool, Cluely AI, has recently drawn widespread attention, not for breakthroughs in productivity or creativity, but for its contested purpose: cheating.

Created by two university students, Chungin “Roy” Lee and Neel Shanmugam, Cluely is an AI-driven application aimed at supporting users during live conversations, such as job interviews or academic evaluations carried out over video conferencing platforms like Zoom. The tool has ignited extensive discussion regarding the ethical ramifications of AI in professional and educational environments.

The Origins of Cluely

Cluely originated as a project named Interview Coder, a tool Lee and Shanmugam built to help job applicants prepare for and succeed in technical interviews. The AI listened to interview questions in real time and offered suggested responses, effectively serving as a digital whisperer during critical conversations.

The tool drew attention when Roy Lee employed it during a job interview with Amazon. After landing the position, Lee openly shared his experience on social media, which quickly became a sensation. However, the celebration was short-lived. Columbia University, where Lee studied, expelled him for breaching academic integrity regulations. The incident underscored the rising tension between AI advancements and institutional ethics.

What Is Cluely?

Cluely is an AI assistant that runs during live video calls. Available for Mac users, the app operates in an in-browser window that stays hidden from other participants on the call. It listens to the conversation, transcribes the audio into text, and relays the transcript to a cloud-based AI engine. The AI then generates real-time responses or suggestions, which the user can draw on to answer questions persuasively and accurately.
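The pipeline described above, capturing audio, transcribing it, querying a cloud model, and surfacing a suggestion, can be sketched in a few lines. This is a minimal illustrative loop, not Cluely's actual code: every function name here is a hypothetical stand-in, and a real implementation would call a speech-to-text service and an LLM API in place of the stubs.

```python
# Minimal sketch of a real-time transcribe-and-suggest loop.
# All components are hypothetical stand-ins, not Cluely's API.

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text call."""
    # For illustration, pretend the "audio" is already text.
    return audio_chunk.decode("utf-8")

def suggest(question: str) -> str:
    """Stand-in for a cloud LLM call that drafts an answer."""
    return f"Suggested answer to: {question!r}"

def assist_loop(audio_stream):
    """Transcribe each chunk and yield a suggestion for the user to read."""
    for chunk in audio_stream:
        text = transcribe(chunk)
        if text.endswith("?"):  # only react when a question is heard
            yield suggest(text)

# Usage: feed a fake "audio" stream of two utterances.
stream = [b"Tell me about yourself.", b"What is a hash table?"]
for tip in assist_loop(stream):
    print(tip)
```

The design point the sketch captures is latency: each chunk is handled as it arrives via a generator, so a suggestion can appear while the speaker is still talking, rather than after the whole conversation is transcribed.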

While initially built for job interviews, Cluely's applications go far beyond that. It can be used in any situation involving a video call, including academic exams, business meetings, and even informal chats where users want to appear more knowledgeable or articulate.

The Funding and Future of Cluely

Despite its controversial nature, Cluely has attracted considerable investor interest. The startup recently raised $5.3 million in funding, a sign that there is a market for AI tools that enhance human performance in real-time interactions, even when they blur ethical boundaries.

The funding will likely be allocated to broaden Cluely’s functionalities, refine its AI engine, and perhaps create versions for additional operating systems and video platforms. The team behind Cluely envisions a future where AI can seamlessly enhance human communication, empowering users to feel more assured and competent in high-pressure situations.

Ethical Concerns and Institutional Backlash

The emergence of Cluely has revived discussions surrounding the ethical application of AI. Detractors contend that tools like Cluely erode the integrity of job interviews, academic evaluations, and other assessment processes. By offering immediate answers, Cluely enables users to showcase a misleading representation of their knowledge and abilities.

Educational institutions and employers are especially apprehensive. Many schools have already prohibited generative AI tools like ChatGPT over concerns of academic dishonesty. Cluely escalates those worries by providing a tool specifically designed to aid users in real-time, complicating detection efforts.

Conversely, proponents argue that Cluely is merely a more sophisticated form of assistance, akin to using a calculator on a math test or a teleprompter during a speech. They contend that as AI becomes more embedded in daily life, the definition of cheating will need to evolve.

The Bigger Picture

While Cluely is presently identified as a “cheating AI,” its core technology has wider implications. Real-time AI support could transform sectors such as customer service, language translation, and accessibility for individuals with speech or cognitive challenges. The same technology that facilitates cheating could also empower individuals to communicate more efficiently and confidently.

Conclusion

Cluely AI embodies both the potential and risks of artificial intelligence in contemporary society. It demonstrates how AI can enhance human abilities in unprecedented manners, but also raises significant ethical and institutional concerns. As tools like Cluely grow more advanced and prevalent, society will need to contend with how to reconcile innovation with integrity.

Whether Cluely becomes a cautionary tale or a groundbreaking resource will depend on how developers, users, and regulators shape its trajectory. One thing is certain: in the era of AI, the line between assistance and dishonesty is becoming increasingly blurred.