Artificial Intelligence Archives | Syracuse University Today https://news-prod.syr.edu/topic/artificial-intelligence/

4 Ways Jeff Rubin Is Thinking About AI Right Now /2026/04/10/4-ways-jeff-rubin-is-thinking-about-ai-right-now/ Fri, 10 Apr 2026 15:29:44 +0000

The post 4 Ways Jeff Rubin Is Thinking About AI Right Now appeared first on Syracuse University Today.

STEM 4 Ways Jeff Rubin Is Thinking About AI Right Now

Rubin speaks with a packed Founders Room crowd of students, faculty and staff on the current AI landscape. (Photo by Chuck Wainwright)

4 Ways Jeff Rubin Is Thinking About AI Right Now

The University’s chief digital officer shares insights on the job market, data silos and the environmental impact of data centers.
Jen Plummer April 10, 2026

Ask Jeff Rubin what keeps him up at night about artificial intelligence and you won’t get a single answer.

The University’s senior vice president for digital transformation and chief digital officer is tracking several threads at once: how AI can reshape higher education, why the job market isn’t collapsing the way headlines suggest, what it will take to rebuild trust in online content, the need for regulation and where the University’s massive stores of data fit into all of it.

Rubin shared some of his recent thinking as a panelist at a Maxwell School fireside chat on digital transformation and AI in New York state. Here are four takeaways.

1: The Job Market Will Shift, But History Offers Perspective

Despite recent headlines about mass layoffs, Rubin argues the data tells a more nuanced story. He pointed to findings that less than 1% of the 1.4 million layoffs tracked in 2025 were attributable to AI.

He compared the moment to the mid-1990s, when the commercialization of the internet changed what people could accomplish in an eight-hour workday. Work didn’t disappear; it shifted. AI, he says, is the next version of that shift.

Those who don’t learn to incorporate AI into their field will find themselves at a disadvantage, Rubin says—and that applies to every discipline, not just technical ones.

That’s part of why he’s pushing for digital literacy to become a standard part of a liberal arts education.

“We need humanities, we need social science, we need math,” he says. “But where’s digital literacy?”

2: Trust Is a Solvable Challenge, But a Serious One

Rubin was candid about the current crisis of trust around AI-generated content. He described himself as someone who lives and breathes AI daily yet still struggles to tell real media from fabricated material.

“I feel like I’m the most gullible person because when I read something or my kids send me something, I don’t know if it really happened or not,” he says. “And so now I’m spending my time trying to verify information.”

The flood of low-quality, machine-generated content online—“AI slop”—is significant, but he says it’s solvable. He pointed to ideas like watermarking verified media or blockchain-based content verification, though he noted that solutions will need to work at a global scale, not just a state or federal one.

Closer to home, Rubin says the University is trying to lead by example. When Syracuse builds a new tool—such as its new AI-powered class search tool—he wants users to see how it works, what it can answer, what it won’t and what guardrails are in place.

“Transparency and responsibility are going to be a big part of this,” Rubin says.

3: AI Thrives on Data (And Higher Education Has Plenty of It)

When asked what excites him most about AI’s potential, Rubin zeroed in on data. For decades, institutions like Syracuse have built data systems that serve individual functions well—enrollment data, alumni data, class data—but don’t always connect to one another.

“AI is not afraid of data,” Rubin says. “The more you can give it, the better it’s going to be.”

When those data silos are connected, the possibilities shift. The University could pair that combined data with AI’s processing capacity to ensure students aren’t slipping through the cracks, help them find the right courses and clubs and engage alumni in more meaningful ways, to name just a few possibilities.

4: The Environmental Cost Is Real, and Will Likely Get Worse Before It Gets Better

Rubin didn’t shy away from the impact of AI’s environmental footprint. Data centers require massive amounts of energy, and the demand is growing faster than the clean energy infrastructure needed to power them.

“Over the next five to 10 years, we are going to use a lot of carbon to build our data centers and keep up with the demand,” he says.

Building out cleaner energy sources—such as nuclear power—takes time, potentially a decade or more. In the interim, Rubin says, the industry will need to develop more energy-efficient AI models that require less computing power to run.

It’s a tension Rubin acknowledges plainly: the technology that promises efficiency gains is itself an enormous energy consumer, and the path forward requires both better infrastructure and better engineering.

“These are very active policy conversations that are happening right now,” he says.

To learn more about the University’s AI efforts, visit the University’s AI website and subscribe to the bi-weekly newsletter.

Maxwell Fireside Chat Examines AI’s Role in Government and Higher Education /2026/04/06/maxwell-fireside-chat-examines-ais-role-in-government-and-higher-education/ Mon, 06 Apr 2026 19:22:02 +0000

The post Maxwell Fireside Chat Examines AI’s Role in Government and Higher Education appeared first on Syracuse University Today.

Campus & Community Maxwell Fireside Chat Examines AI’s Role in Government and Higher Education

From left, Maxwell Dean David M. Van Slyke with fireside chat guests Jeanette Moy, commissioner of the New York State Office of General Services, and Jeff Rubin, Syracuse University's chief digital officer (Photos by Chuck Wainwright)

Maxwell Fireside Chat Examines AI’s Role in Government and Higher Education

Two leaders in digital strategy discussed the policy, ethical and practical challenges of bringing AI into government operations and campus life.
Jessica Youngman April 6, 2026

Artificial intelligence (AI) is reshaping how governments operate, how universities teach and how public institutions make decisions.

That was the central message of a recent fireside chat hosted by the Maxwell School of Citizenship and Public Affairs. Dean David M. Van Slyke moderated the conversation, which brought together two leaders working at the forefront of AI adoption: Jeanette Moy, commissioner of the New York State Office of General Services (OGS), and Jeff Rubin, Syracuse University’s senior vice president for digital transformation and chief digital officer.

“The question before us is not whether AI will transform public life,” Van Slyke said. “It’s whether our institutions are ready to lead that transformation thoughtfully, equitably and effectively.”

A recent fireside chat hosted by the Maxwell School brought together two leaders working at the forefront of AI adoption.

Personalizing Learning and Expanding Access

Rubin opened the March 26 event with a claim about the stakes for higher education: AI, he said, has the potential to transform how universities teach in ways not seen in 200 years. “The idea of a professor standing in front of a room, lecturing—and students taking notes and then being assessed through projects, papers and exams—that model has not shifted,” he said. “What AI allows you to do is personalize learning.”

Personalization at scale has long been a challenge because no instructor can simultaneously tailor a course to every student’s pace and needs, he said. AI changes that equation.

Rubin shared how Syracuse has deployed more than 30,000 AI licenses across campus to drive equitable access and data security. Some students had already purchased AI tools on their own, while others could not afford them, he pointed out. Faculty and staff also needed a secure environment for uploading sensitive documents without routing data through commercial platforms.

Rubin also highlighted a less-discussed dimension of the University’s AI work: a private wireless network, built in partnership with JMA Wireless, that supports thermal sensors in academic buildings across campus. The sensors detect occupancy without capturing identifying information, allowing the University to optimize janitorial services, plan building capacity and, eventually, adjust heating and cooling based on actual use patterns.

A Measured Approach to Government AI

Moy noted that the state’s deliberate pace of technology adoption is a necessary safeguard rather than a liability. “I would contend that it’s important that government is risk-averse,” she said. “The information that we hold is really important—Medicaid data, health data, testing information. The importance of that stewardship becomes paramount.”

Her office oversees roughly 30 million square feet of state real estate, manages 1,500 procurement contracts valued at $44 billion and administers a design and construction portfolio of approximately $5.7 billion. Moy described the agency’s AI strategy as a measured approach. It involves first identifying low-risk, high-value applications, then building the data infrastructure to support them, and ensuring legal and operational frameworks are in place before scaling.

Moy said one of OGS’s most tangible AI investments is in procurement search. Agencies and municipalities navigating the state’s contract catalog often struggle to find what they need, undermining the efficiency those contracts are designed to provide. Moy said AI-assisted search is a logical starting point: low risk, no job displacement and an immediate opportunity to test what the technology can do.

The agency is also piloting AI-powered document summarization tools for bid documents and contract histories, which are reported to save users up to three hours per day.

Moy noted that backlogs present another opportunity, as they are a universal challenge across the public sector. She explained that while AI could help alleviate some of those challenges, agencies must be cautious; they cannot hand out productivity tools to every worker without first creating the right frameworks.

Jobs, Regulation and What Comes Next

Both speakers addressed audience concerns about AI’s impact on jobs—a topic that has gained urgency in New York following Governor Kathy Hochul’s creation of a body tasked with studying AI’s effects on the labor market.

Rubin cited research suggesting that less than 1% of the 1.2 million layoffs recorded in 2025 were directly attributable to AI, arguing that economic factors and structural business decisions are doing more to reshape the workforce than the technology itself. He expressed confidence that AI will ultimately create more jobs than it displaces, though he acknowledged that every job will change.

“If you don’t know how to incorporate AI into your domain and discipline, you will be at a disadvantage,” he said. “Students need to have the tools and the classes.”

Moy recalled the dot-com era and the transformation of publishing that upended models at institutions like the Brooklyn Public Library, where she once served as chief strategy officer. The fear and exuberance that accompanied those transitions, she said, mirrors what society is experiencing today.

“We want to make sure that we’re thinking about it ethically, that we’re balancing it according to public need,” she said. “And we’re having active conversations about those trade-offs.”

Both panelists returned repeatedly to the theme of transparency in AI systems, government data and institutional communications.

Rubin pointed to Anthropic’s practice of publishing system prompts as a model for responsible AI deployment and noted that Syracuse recently launched an AI-powered course search tool that similarly makes its operating parameters visible. He also raised the challenge of AI-generated media and the difficulty of distinguishing real content from fabricated content online.

The fireside chat included an opportunity for members of the audience, many of whom were students, to ask questions of the panelists.

An Open and Ongoing Dialogue

The conversation drew questions from the audience.

A first-year Maxwell student and member of the University’s United AI club asked what precedent a recent court ruling holding social media platforms liable for algorithmic harm to minors sets for the future of AI regulation and whether platforms like ChatGPT should face similar oversight.

Rubin was direct: “We made the mistake with social media. These companies should have an obligation to have guardrails.”

Moy pointed to Hochul’s recent policy proposals targeting addictive technology, including requirements for more restrictive default settings on children’s accounts. She acknowledged that government is often a step behind rapid technological change, but argued that intervention becomes necessary when innovation results in public harm.

A second student raised concerns about AI’s potential to enable fraud, including falsified documents and biased algorithms.

“These are very real questions,” Moy said, emphasizing that OGS is working to understand AI’s uses and risks. She argued that the answer isn’t avoiding AI but understanding it well enough to spot its misuse. “If we don’t understand it, we will fall behind.”

Rubin agreed, framing the detection challenge as both technological and philosophical: As AI becomes embedded in everything from autocomplete to document editing, defining what counts as “AI-generated” becomes increasingly difficult. “My gut is almost every piece of content out there will have some AI piece to it, assisting us,” he said. “So, it’s a technology challenge and a societal challenge.”

Van Slyke closed by noting that Maxwell’s role in preparing students for public service has always meant equipping them not just with technical knowledge, but with the ability to navigate the policy, governance and ethical dimensions that accompany it.

“The question is not what will AI do to our institutions,” he said. “It’s what will we choose to do with it.”

Students Unite Around AI By Bringing Diverse Voices to Technology’s Future /2026/04/02/students-unite-around-ai-by-bringing-diverse-voices-to-technologys-future/ Thu, 02 Apr 2026 15:52:45 +0000

The post Students Unite Around AI By Bringing Diverse Voices to Technology’s Future appeared first on Syracuse University Today.

Campus & Community Students Unite Around AI By Bringing Diverse Voices to Technology’s Future

Orion Goodman (left) and Tyler Neary, co-founders of United AI (Photo by Reed Granger)

Students Unite Around AI By Bringing Diverse Voices to Technology’s Future

RSO United AI brings together students across majors to explore artificial intelligence through projects, discussions and community building.
Jen Plummer April 2, 2026

When Tyler Neary ’27 and Orion Goodman ’27 scattered flyers across campus last spring advertising a new AI club, they saw a critical need: students needed to be included in conversations about a technology that would fundamentally reshape their futures.

“AI was at the point where it could help people in every single major, in every single profession, in every single job,” says Neary, a civil engineering major who co-founded United AI with Goodman, a biomedical engineering major, both in the College of Engineering and Computer Science (ECS). “We realized this was no longer just a computer science thing.”

What started as a room of 10 people has grown into United AI, a recognized student organization (RSO) with more than 100 members representing every school and college and most majors. Since its fall semester launch, the club’s focus has been democratizing AI literacy and ensuring students from all disciplines have a seat at the table as this technology transforms society.

Members of United AI engage in dialogue at a recent general meeting. (Photo by Reed Granger)

The group will host an event on Saturday, April 25, from 1 to 5 p.m. in the K.G. Tan Auditorium in the National Veterans Resource Center at the Daniel and Gayle D’Aniello Building, featuring industry speakers, demonstrations and faculty research showcases.

Why Students Need Leadership in AI Development

For Goodman, the urgency became clear watching rapid AI development. “When I’m going through college, watching AI capabilities escalate, it can be disempowering—and I figured my peers may be feeling the same way,” he says. “It felt threatening because there’s a small group of people making decisions about how the technology is being used, and others feel like they’re being left behind.”

That sense of being sidelined drove the co-founders to create what Neary describes as an empowerment space. “Something that we say a lot in the club is: don’t get used by AI, use AI to your benefit,” he says. “We’re the ones who are going into the workforce leading the charge and determining how we will use this technology now and into the future.”

The message resonated. Within weeks of tabling at campus events, students from ECS, the Maxwell School of Citizenship and Public Affairs, the College of Arts and Sciences, the Newhouse School of Public Communications, the Whitman School of Management and the College of Visual and Performing Arts were showing up to meetings, eager to understand how AI would affect their fields and futures.

Bringing Humanities and STEM Into Conversation

When Alex Kahn ’27, a junior studying citizenship and civic engagement and political philosophy, discovered United AI, he wasn’t looking for coding or technical skills; he was drawn by the policy implications of AI that were dominating news headlines. “AI was in every story, across every industry, and it felt like there was no escaping it and how it will affect you,” Kahn says.

As United AI’s recruitment director last fall, Kahn became instrumental in broadening the organization beyond its engineering roots. His approach focused on relevance rather than technical expertise. The interdisciplinary composition has transformed conversations within the club.

“Having people from different majors and disciplines means having that understanding that everyone’s mind works differently,” Kahn says. “The people who are writing code are not thinking the same way as the person majoring in fine arts, and having that creativity along with those technical skills, you’re able to build and think much differently.”

Goodman appreciates what non-engineering perspectives bring to the table. “As conversations around AI progressed, I began asking, ‘Where are the artists? Where are the policymakers? Where are the humanities majors?’” he says. “A lot of the population was not behind building this technology and still isn’t—but how do we provide a space for them to learn and join the conversation?”

From Concept to Creation: Student Projects Take Shape

From left: First-year students Neha Redda, Ria Yagielski and Paige Siciliano won second place during the fall project cycle for their AI-powered schedule builder.

United AI goes beyond theoretical discussion to hands-on application. Through four-week project cycles, students receive funding, access to premium AI tools and mentorship to develop their ideas.

Paige Siciliano ’29, a computer engineering major, led a second-place winning project during her first semester on campus. Her team’s AI-powered schedule builder, still under development, helps students manage their time by generating personalized daily plans based on individual learning styles, fixed commitments and flexible tasks.

For Siciliano and her teammates—Neha Redda ’29 and Ria Yagielski ’29—the project provided more than AI experience. “It really helped us find a way into the community of Syracuse, and it helped us feel like we belonged,” she says.

Building Community Around Shared Curiosity

Beyond projects and programs, United AI has cultivated what Kahn describes as “a school of thought on campus.” During a debate night last semester, members discussed everything from business applications to environmental impacts to personal usage philosophy, with some participants there simply to understand the technology rather than use it. “Being surrounded by club members and in this community of lifelong learners, we focus our educational efforts to not just learn the technical side, but also on practical application,” Kahn says.

Siciliano emphasizes the club’s welcoming atmosphere. “We came in as first-semester freshmen, two weeks into school. It didn’t matter if we had no background knowledge in AI or all the knowledge in the world—they create an atmosphere that makes you want to learn about it and continue to grow.”

To join United AI, sign up through the club’s membership page. To learn more, follow the organization on social media.

Club members gather at the United AI Winter Summit in December 2025.

Whitman, Libraries Launch Information Literacy Certificate /2026/03/23/whitman-libraries-launch-information-literacy-certificate/ Mon, 23 Mar 2026 16:45:56 +0000

The post Whitman, Libraries Launch Information Literacy Certificate appeared first on Syracuse University Today.


Whitman, Libraries Launch Information Literacy Certificate

The new digital badge program helps undergraduate and graduate business students build research and critical thinking skills for the AI-driven workplace.
Cristina Hatem March 23, 2026

The Whitman School of Management and Syracuse University Libraries have partnered to launch an Information Literacy Certificate, a new self-paced credential designed to help business students evaluate sources, identify misinformation and apply research skills in a professional landscape increasingly shaped by artificial intelligence (AI).

The program, offered in collaboration with the Office of Microcredentials, is open to both Whitman undergraduate and graduate students and builds core skills in information literacy, a crucial competency for academic work and one that employers describe as essential. The skills learned also connect to the University’s Shared Competencies of Information Literacy and Technological Agility and Critical and Creative Thinking.

“For Whitman students, the certificate fills a meaningful gap between classroom learning and professional readiness,” says Assistant Director of Experiential Programs Roshawn Kershaw. “It increases a student’s ability to find reliable information, assess its credibility and apply it with confidence. This is important for a business environment increasingly shaped by excess data and AI content. It sets them apart from others before they even realize. The certificate is now available to both undergraduates and graduate students, which means it can meet Whitman students wherever they are in their academic journey, reinforcing skills that will serve them from their first internship to the boardroom.”

To earn the certificate and digital badge, students take online self-paced tutorial modules that introduce them to key information literacy skills and library resources:

  • Identifying Bias and Misinformation
  • Types of Sources
  • Evaluating Information
  • Research as Process
  • Search Basics, Part 1
  • Search Basics, Part 2
  • Syracuse Libraries Resources
  • Student Guide to AI

“I am so excited to have these online tutorials become an official certificate and digital badge that is now available to both grads and undergrads,” says Librarian for Business, Management and Entrepreneurship Steph McReynolds. “We’ve offered the tutorials as part of the program for years, and students have asked for a certificate to show employers their accomplishments in this area, and now we can provide that digital credential.”

Information Literacy Librarian Kelly Delevan sees the certificate as a template for developing information literacy badges for other schools and colleges at Syracuse. It is even serving as a model beyond the University: a librarian from another institution recently reached out about adapting the certificate’s module categories for their own library.

Anthropic-Pentagon Dispute Reveals Limits of AI Self-Regulation, Expert Says /2026/03/13/anthropic-pentagon-ai-self-regulation/ Fri, 13 Mar 2026 16:15:23 +0000 Hamid Ekbia, director of Syracuse University's Autonomous Systems Policy Institute, examines the political and economic forces behind the Anthropic-Pentagon standoff and what it means for the future of AI self-regulation.

The post Anthropic-Pentagon Dispute Reveals Limits of AI Self-Regulation, Expert Says appeared first on Syracuse University Today.


Anthropic-Pentagon Dispute Reveals Limits of AI Self-Regulation, Expert Says

AI policy expert Hamid Ekbia examines why the Anthropic-Pentagon dispute was inevitable and what it reveals about the limits of industry self-regulation.
Christopher Munoz March 13, 2026

Can an AI company take government money and still set limits on how its technology is used? That question is at the center of an ongoing dispute between the Pentagon and Anthropic, and Syracuse University professor Hamid Ekbia says it exposes fundamental tensions in how the AI industry operates.

Ekbia, founding director of the Academic Alliance for AI Policy, says the Pentagon’s demand that Anthropic either change its approach or forgo its lucrative contract is a vivid example of current federal policy. “With the bulk of public AI funding in the U.S. still coming from defense, companies either have to budge or shut themselves out from this unique source of money,” Ekbia says.

While Anthropic has adjusted some safety policies, it has so far declined to allow its technology to be used for domestic surveillance or autonomous drones, a distinction Ekbia says matters.

“That is cause for celebration for any observer concerned about such applications,” he says. “But the question going forward is whether this will continue to be the case.”

Political and Economic Forces

Ekbia says the pressure on Anthropic reflects a broader shift in the federal government’s approach to AI regulation.

“The anti-regulatory policies of the Trump administration don’t leave much room for safety-oriented approaches to AI,” he says, adding that those policies push companies and oversight bodies toward “aggressive and often reckless behaviors in the name of innovation.”

Market competition makes the pressure worse. “The AI ecosystem is defined by furious competition among a few big players in a race to grab the lion’s share of the spoils in a rapidly growing industry,” Ekbia says. “The ‘moral economy’ of the AI industry is one of the jungle, where only the most reckless, ruthless, and aggressive behaviors are expected to be rewarded.”

Employees as a Wild Card

One factor that could shape the outcome is pressure from within Anthropic itself. Ekbia says employee resistance has played a meaningful role so far, with workers vocal during negotiations and leadership appearing to take that seriously.

But he cautions that employee influence is not guaranteed to last. “How critical will employees be in the future of the company given the current wave of white-collar under-employment, and how assertive will they be in expressing their resistance?” he says.

He outlines several other variables that will determine how the situation unfolds: whether competing AI companies are willing to fill the gap for the Pentagon, how hard the Trump administration continues to push for broad access to AI technology, and how well Anthropic can sustain itself financially without defense funding.

“The speed of change in these areas makes it hard to make solid predictions,” Ekbia says.

The Limits of Self-Regulation

Ekbia says the dispute ultimately tests a premise that Anthropic has staked its reputation on—that a company can be both commercially successful and a responsible steward of powerful technology.

“In the absence of federal policy, Anthropic aspired to play that role in the industry,” he says. “What is happening shows the limited efficacy of that aspiration. Society cannot rely on the industry to self-police itself, despite even the best intentions.”

He connects that failure to a broader culture in Silicon Valley, where prominent figures publicly embrace “effective altruism”—the idea that profit and doing good can coexist.

“The case of Anthropic shows how much of an illusion this is,” Ekbia says. “As the old saying goes, you cannot have your cake and eat it too.”


Newhouse Assistant Professor Recognized Nationally for Innovation in Teaching /2026/03/04/newhouse-assistant-professor-recognized-nationally-for-innovation-in-teaching/ Wed, 04 Mar 2026 20:27:51 +0000

The post Newhouse Assistant Professor Recognized Nationally for Innovation in Teaching appeared first on Syracuse University Today.


Newhouse Assistant Professor Recognized Nationally for Innovation in Teaching

The award also recognizes Milton Santiago’s work in exploring the ethical and practical applications of generative artificial intelligence in visual communications.
Genaro Armas March 4, 2026

Milton Santiago, assistant professor of visual communications in the S.I. Newhouse School of Public Communications, has received the 2026 Innovation in Teaching Award from the Broadcast Education Association (BEA)—one of the organization’s most prestigious honors for media educators.

The award recognizes Santiago’s progressive, hands-on approach to teaching cinematography and visual storytelling, including his work exploring the ethical and practical applications of generative artificial intelligence in visual communications. He will be recognized at BEA’s annual convention in Las Vegas on April 17.

Santiago brings more than 15 years of professional experience in the film and television industry to the classroom. Before joining the Newhouse School in 2021, he worked as a freelance cinematographer and content creator in Los Angeles, shooting feature films and documentaries that screened at the Tribeca Film Festival, the Copenhagen International Film Festival and SXSW EDU. He has created content for brands including Disney, Procter & Gamble and Levi’s, and previously held production roles at Showtime Networks and Sundance Channel.

In November 2025, Santiago launched the Newhouse School’s AI Creative Summit in partnership with Adam Peruta, an associate professor and director of the advanced media management master’s program. The two-day program combined hands-on workshops with a fast-paced content creation competition to explore how generative AI is transforming creative workflows.

At Newhouse, Santiago also serves as director of the military visual journalism program, a longstanding initiative that trains active-duty service members in communications, photography, design and video production.

His teaching has earned multiple honors, including Syracuse University’s Laura J. and L. Douglas Meredith Teaching Recognition Award for Early Performance, and the University Film and Video Association Teaching Excellence Award for Junior Faculty.

ECS Launches Minor in Artificial Intelligence Science and Engineering /2026/02/11/ecs-launches-minor-in-artificial-intelligence-science-and-engineering/ Wed, 11 Feb 2026 20:01:35 +0000 /?p=332682 The minor, beginning this fall, will prepare students to thrive in an artificial intelligence driven environment and provide them with highly marketable skills.


ECS Launches Minor in Artificial Intelligence Science and Engineering

The minor, beginning this fall, will prepare students to thrive in an artificial intelligence driven environment and provide them with highly marketable skills.
Alex Dunbar Feb. 11, 2026

A new minor in artificial intelligence science and engineering is designed to equip students with essential knowledge and skills in one of today’s most transformative fields. The minor, offered through the College of Engineering and Computer Science (ECS), will launch in the Fall 2026 semester.

New technologies such as Anthropic’s Claude and OpenAI’s ChatGPT are changing paradigms, and the technology industry is pivoting toward artificial intelligence. Coding agents are changing the way software is developed, retrieval-augmented generation is changing the way companies manage data, and new systems promise further disruption. The new minor is designed to prepare students to thrive in this environment, providing them with skills highly sought after by employers in the age of AI.

The 18-credit program combines core computing principles with specialized AI coursework, preparing graduates to navigate and contribute to the rapidly evolving landscape of artificial intelligence. It can be easily paired with other STEM majors.

The minor requires completion of 18 credits divided into two components:

Computing Foundations (nine credits): Students build essential technical skills through coursework focused on computational disciplines, establishing the groundwork necessary for advanced AI study and providing the programming and mathematical basis to understand advanced concepts such as language models and supervised machine learning.

AI Fundamentals and Programming (nine credits): These courses delve into artificial intelligence concepts, methodologies and applications, enabling students to develop expertise in this cutting-edge field. Courses include a strong focus on machine learning, using generative AI systems to create software and understanding large language models for various applications such as retrieval-augmented generation.
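As an illustration of the retrieval-augmented generation (RAG) pattern named in the course descriptions above, here is a minimal toy sketch. The document store, bag-of-words "embedding" and prompt format are all invented for illustration; a real system would use learned embeddings, a vector database and an actual language model.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny document store standing in for an organization's data.
documents = [
    "The ECS minor requires 18 credits split into two components.",
    "Retrieval-augmented generation grounds a language model in retrieved text.",
    "Supervised machine learning trains models on labeled examples.",
]

def retrieve(query, k=1):
    """Step 1 of RAG: rank stored documents by similarity to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    """Step 2 of RAG: prepend the retrieved context to the question
    before handing it to a language model (not included here)."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many credits does the minor require?"))
```

The point of the pattern is that the model answers from retrieved company data rather than from its training alone, which is why the curriculum pairs language-model coursework with retrieval applications.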

This minor is open to all University undergraduate students. It is designed for students seeking to enhance their primary degree with AI competencies.

Graduates of the program will possess key knowledge in artificial intelligence, positioning them competitively for careers in technology, research, data science and emerging AI-driven industries. As organizations across sectors increasingly integrate AI into their operations, this minor provides students with highly sought-after qualifications.

For more information about admission requirements and course offerings, students should contact their academic advisor or Priyantha Kumarawadu, associate teaching professor of electrical engineering and computer science and computer science undergraduate program director, at spkumara@syr.edu.

Researcher Examines Use of AI in Young Adults’ Romantic Lives /2026/02/05/researcher-examines-use-of-ai-in-young-adults-romantic-lives/ Thu, 05 Feb 2026 18:35:46 +0000 /?p=332297 Associate Professor Rebecca Ortiz surveyed young adults about AI companions and found both potential benefits and troubling behaviors with the technology.


(Photo courtesy of Adobe Stock)

Researcher Examines Use of AI in Young Adults’ Romantic Lives

Associate Professor Rebecca Ortiz surveyed young adults about AI companions and found both potential benefits and troubling behaviors with the technology.
Dialynn Dwyer Feb. 5, 2026

The growing day-to-day use of AI may come as no surprise, with its integration into daily tools like email, smart devices and social media.

But it’s also becoming more common for people to engage with AI for emotional or romantic companionship. Newhouse School of Public Communications associate professor Rebecca Ortiz, who studies youth, media and sexual health, decided to examine the trend. Hearing from young people about how they use AI to build relationships or understand their own human connections prompted her research.

“If I’m going to continue to research how media play a role in young people’s lives, particularly as it relates to their sexual health and romantic relationships, AI chatbots and companions are going to have to be part of that conversation,” she says.

Ortiz and her colleagues surveyed young people to learn how they’re using AI chatbots or companions for romantic, emotional or sexual purposes and how that use relates to their own romantic boundaries and communication.

“One of my questions was, ‘How might use of AI for romantic companionship result in helpful or harmful outcomes?’” Ortiz says. “For example, could practicing communication with an AI companion help someone communicate with a human partner, such as practicing how to flirt or how to say things they might feel uncomfortable saying?”

The Survey

Rebecca Ortiz

Ortiz and her colleagues surveyed 1,500 18- to 21-year-olds. About 360 respondents reported using AI for romantic companionship.

About two-thirds of the 360 said they used AI companions in a way similar to a long-term romantic relationship, communicating over a period of time rather than in a single, one-off interaction.

Ortiz says quite a few reported using AI to “practice” how to engage in their human relationships. One participant shared that they were having problems with their romantic partner and used AI to roleplay how they might cheer up their loved one or help them feel better.

“This respondent said it gave them some guidance for what to do,” Ortiz says. “Then there were some people who said, ‘It helped me figure out how to flirt. It helped me work through some of the awkwardness of communication.’ So at least some young people are using these companions to practice or get a sense of what it would be like when they take it to a human relationship.”

Areas of Concern 

Ortiz says one concern she and her colleagues observed was that some AI companions default to sexually aggressive language or exchanges that do not follow a constructive, consenting back-and-forth.

“This is concerning regardless of what age you are, but we are particularly concerned for young people who are still learning how to communicate about consent and boundaries,” Ortiz says.

They found that some of the apps, if given an indication the user was interested in sexual or romantic communication, would almost immediately become sexually aggressive.

Ortiz says those responses are a red flag that AI companions can model unhealthy, abusive communication, an important element to further examine and include in discussions about AI companions.

Experiencing Stigma 

The survey asked participants about their emotional connection with AI companions and whether they felt the tool understood their emotions, among other questions probing the relationship between the young adult and the technology.

Ortiz found that some of the young people did express a strong connection with the AI companion, with some listing loneliness as a motivation for use.

Even with the belief that AI could be a “safe space,” Ortiz says her survey indicates there is still stigma around using AI tools for romantic or sexual purposes.

“Many in the survey reported that they thought using AI for romantic companionship was weird, unacceptable, not a normal thing to do,” Ortiz says. “Most of the respondents didn’t think this was a common behavior among people their age, but you can see there is a good chunk of young people who are using it for these purposes.”

What Should Be Asked Next

Ortiz says there is not a clear indication that using AI companions or chatbots for romantic companionship is leading to healthier outcomes for most users.

“Unfortunately, the results show that, for some users, engaging with these AI companions has the potential to be related, not necessarily causing, but related to less healthy romantic beliefs and behaviors,” she says.

Ortiz hopes her work can serve as a warning sign to people creating companion apps or platforms like ChatGPT that boundaries and guardrails should be built in so users can engage in healthy, safe ways.

People are building real relationships with AI companions, and the goal should be to understand and ensure there are healthy outcomes, without being too judgmental, she says.

“AI companionship is not going away,” Ortiz says. “So the question should be, how can this be used in more helpful than harmful ways if we know young people are going to use it? It’s just another tool for young people to help make sense of themselves, and we should be open to understanding that if we want to help them build healthy relationships.”

Why People Misinterpret the News /2026/02/02/why-people-misinterpret-the-news/ Mon, 02 Feb 2026 14:41:30 +0000 /?p=332091 Mass communications researcher Jamie Gentry studies how political stories change as they move from newsrooms to social media.


Why People Misinterpret the News

Mass communications researcher Jamie Gentry studies how political stories change as they move from newsrooms to social media.
John Boccacino Feb. 2, 2026

When doctoral student Jamie Gentry G’27 covered politics as a local news reporter for the weekly Navarre Press in northwest Florida, she turned potentially complicated issues into easy-to-understand stories.

Jamie Gentry

But Gentry was amazed at how often people would misinterpret, misconstrue or misremember the information presented in her articles. She overheard many conversations, in person and online, in which citizens carried out emotional arguments using incorrect information.

“I started wondering why I wasn’t able to reach as many people as I could with the actual facts of a story,” Gentry says. “It was frustrating because my job is to give people the best possible information. People need good information to make good decisions, and journalists are supposed to do that. But I found the system wasn’t working.”

Gentry knew there was a disconnect between how political news was being reported and how it was being talked about in her community. She vowed to become part of the solution.

How to Fix a Broken System

Driven by her reporting experiences, Gentry transitioned from journalism to higher education and began pursuing a doctoral degree in mass communications from the S.I. Newhouse School of Public Communications.

With a grant from the University, Gentry’s ongoing research explores how artificial intelligence (AI) tools used by journalists affect how politics is discussed online and in the real world.

Gentry is comparing how people respond to and discuss a complicated news topic among their communities and on their social media channels under two different scenarios.

Out of 400 online survey respondents, one group is tasked with reading a traditional news story about unemployment, while another digests the information with the help of an AI-generated key takeaways breakout box. Half of the participants are told to share their impressions of the article with someone they know face-to-face, while the other half are tasked with sharing a post about the topic on social media.
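As an illustration only, the two-by-two design described above (story format crossed with sharing mode) could be balanced with simple random assignment. The group labels and seed below are hypothetical sketches, not taken from Gentry’s study materials.

```python
import random
from collections import Counter

def assign_conditions(n_participants=400, seed=1):
    """Balanced random assignment to a 2x2 between-subjects design:
    article format (traditional story vs. AI key-takeaways box)
    crossed with sharing mode (face-to-face vs. social media post)."""
    rng = random.Random(seed)
    formats = ["traditional", "ai_summary"]
    modes = ["face_to_face", "social_media"]
    cells = [(f, m) for f in formats for m in modes]
    # Repeat the four cells evenly, then shuffle the order in which
    # arriving participants receive them.
    assignments = cells * (n_participants // len(cells))
    rng.shuffle(assignments)
    return assignments

counts = Counter(assign_conditions())
print(counts)  # each of the four cells holds 100 of the 400 participants
```

Balancing the cells this way keeps each format-by-mode comparison based on equally sized groups, which is what lets the design isolate the effect of the AI summary from the effect of the sharing channel.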


At each step, from the journalist sharing their reporting to the survey participant consuming the content to the person receiving the news, there’s an opportunity for the message to change from the original reporting.

“Generally, people tend to accept facts, but we still see arguments over facts online, and we see that people become very polarized,” Gentry says.

An important trend in the political communications research field—combining the study of media and political science—is examining how, in an increasingly polarized country, being divided politically impacts the quality of political reporting.

During this “explosion of media choice,” when people have more ways than ever to consume the news, Gentry says the increase in choice means people opt for stories that align with their political ideology.

“That has a real impact on how people engage with politics and how they interpret the news they receive,” Gentry says when identifying an area for future research. “It’s not so much that people are blatantly believing misinformation and don’t care about facts. It’s more that partisanship is impacting how people receive messages and what stories they do and do not see.”

Can AI Be Trusted?

As informers, journalists are charged with breaking down complex topics into digestible content, and they make decisions about what information to include, which sources to interview and which stories to cover.

When she was covering the news, Gentry says, it was easy to think she knew what the most important angles were. But as more journalists use AI to produce story summaries, it’s natural to wonder whether AI can reliably convey that important information.

“Journalists influence how people learn about and understand a subject matter. Should we be trusting these AI tools to reliably make decisions about what is the most important part of a story?” Gentry says. “Whatever AI decides is the most important snippet of information is being pushed out and that has real implications for how people are getting the news and what they actually know about a story.”


Gentry expects to receive data from her survey participants later this semester. Among her anticipated findings: story summaries make the facts more accessible and easier to process, retain and share.

“My goal is to make journalists better by giving them the tools to better understand how their work impacts the public,” Gentry says. “By sharing data on what works and what doesn’t, hopefully we can make big improvements in the way the news is shared.”

College of Law Holds First AI Residency Program /2026/01/20/college-of-law-holds-first-ai-residency-program/ Tue, 20 Jan 2026 19:56:07 +0000 /?p=331395 Students gained new skills, discussed ethical questions and emerged with a sense of urgency to keep pace with this booming technology.


College of Law Holds First AI Residency Program

Students gained new skills, discussed ethical questions and emerged with a sense of urgency to keep pace with this booming technology.
Caroline K. Reff Jan. 20, 2026

How are law firms currently applying artificial intelligence in the workplace to maximize client services? What are the ethical implications of using AI in the legal field? How will AI impact the current role of lawyers, and what new jobs may emerge? Should AI be regulated, and, if so, how?

These were just some of the questions addressed during AI and the Virtue of Law, a one-week in-person residency held at the College of Law in August, designed for students in the JDi program, with participation also open to on-campus students. This deep dive into AI was created and facilitated by Professor Jack Graves.

“I think AI will significantly transform law school education and the practice of law,” says Graves, noting that he sees AI as a means of more effective information sharing but also recognizes that many are “terrified” thinking that this technology could replace them.

“We have to think about being nimble now because the essential human role today will likely be an AI role in just a few years, and we don’t want to be left behind. Through this residency, I wanted to help demystify generative AI because, used properly, it can be an extraordinary tool,” Graves says.

Professor Jack Graves discusses AI with students during the first AI residency program.

Graves, who has taught in the JDi program for the past five years, has a unique blend of expertise in the design, development and delivery of accessible legal education in an online learning environment, and in 21st-century, technology-leveraged law practice.

A graduate of the University of Colorado Law School, Graves taught the technology-leveraged delivery of legal services at the Touro Law Center for 14 years. Before that, he worked in private practice with Chrisman, Bynum & Johnson PC in Colorado, and as a judicial law clerk for the U.S. Court of Appeals.

First Time Residency a Popular Draw

Logan Gorg L’26 is a JDi student living in Pennsylvania who made the trip to campus to attend the AI residency. She has worked as a paralegal at the law firm of Ross & Ross LLC for the past 10 years and is looking to focus on real estate and probate law upon graduation.

“I learned so much about what AI is, and the information at the residency helped to dispel some of the fears and focus more on where the profession is going,” Gorg says. “Sitting in a room with a group with diverse backgrounds and experiences talking about whether AI was doom or salvation was so interesting. I think the residency showed us that AI is unavoidable, but, if we get out in front of it, we can reap some of the benefits in the legal profession.”

Graves had been contemplating a semester-long AI course for the JDi program, but ultimately decided that the lightning speed of the technology was better suited to a short-form, concentrated residency, where students with different levels of familiarity could come together to think about staying nimble and adapting to technology that is already changing the way the legal field operates.

AI Voice-Driven Technology Used to Teach, Demonstrate Abilities

Coincidentally, the residency took place just as ChatGPT launched Advanced Voice Mode, a significant upgrade that allows for natural, real-time conversations using AI. Graves used “Max,” as he named the voice-driven AI technology, to help co-teach the residency and answer students’ questions directly.

“We would have a discussion, and I would say, ‘Max, what do you think?’” Graves says. “At first students were uncomfortable with it, but once Max started responding and asking them questions using the Socratic method, they started to see how fascinating a learning AI tool could be.”

Approach to AI in the Law Resonates With JDi Students

Jenny Cameron L’27, who co-owns VIP Marinas with her husband in Florida, decided to enroll in law school to bring a legal perspective to her family business. She, too, attended the AI residency and walked away amazed.

“Honestly, it was one of those residencies that was life changing,” Cameron says. “Before I attended, I was on the fringes of AI, barely using ChatGPT, but since then I’ve been using AI extensively in some form. Part of law school is practicing and knowing how to use AI better and faster, and what I learned at the residency was eye opening. I commend the College of Law and Professor Graves for taking the lead on this and helping guide us on how we should be approaching this technology.”

Another participant was Bryan Beene ’26, a high school government teacher from Texas, who is pursuing law school to prepare for a second career once he retires. He hopes to work as a lawyer in the education or church law space.

“I registered for this AI residency for two reasons: one because Professor Graves was teaching it, and he is one of the best professors I’ve ever had; and two, because I had never used AI except for Google searches, and I knew a lack of knowledge around this technology would be a detriment in representing a client,” Beene says.

Beene noted that he enjoyed learning more about the available tools, discussing the legal and ethical issues, and examining how regulations and the law are often not keeping up with this fast-moving technology.

The newly introduced AI and the Virtue of Law residency received “incredible feedback” from students, says Graves, who believes this is a topic that should be revisited once a year.

“This is not a static course, as the technology is changing continuously, but I think the approach resonated well with the students, not only by teaching them skills but by allaying some of their fears while also emphasizing to them that AI technology in the legal field is advancing fast and furiously. So they need to prepare now,” Graves says.

The Spoofing Problem: Why Tech Platforms’ Age Verification May Not Protect Minors /2025/12/16/the-spoofing-problem-why-tech-platforms-age-verification-may-not-protect-minors/ Tue, 16 Dec 2025 21:54:57 +0000 /?p=330363 As platforms rush to verify users' ages, experts warn consumer-grade cameras lack the technology to reliably authenticate minors.


The Spoofing Problem: Why Tech Platforms’ Age Verification May Not Protect Minors

As platforms rush to verify users' ages, experts warn consumer-grade cameras lack the technology to reliably authenticate minors.
Daryl Lovell Dec. 16, 2025

If you have a kid or teen at home, you’ve probably heard of Roblox—an online gaming platform where millions of users create, share and play games in a virtual world with its own currency.

The company’s new age verification requirement for users under age 13 sounds like a smart safety measure, but it actually highlights a critical vulnerability affecting gaming platforms, social media sites and any technology company relying on camera-based authentication to protect minors online.

Vir V. Phoha, professor of electrical engineering and computer science at Syracuse University and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the American Association for the Advancement of Science, warns that these systems may create a false sense of security. His research focuses on cybersecurity, machine learning and biometrics, including touch-based and network-based authentication systems.

While AI-powered facial recognition can estimate age within two to three years by analyzing wrinkles and skin condition, Phoha warns the systems are “highly susceptible to spoofing.” Simple presentation attacks—like a printed photo, a smooth mask to mimic younger skin or even a silicone dummy face—can fool verification systems, especially when users authenticate through their own laptop or phone cameras lacking sophisticated liveness detection.

As more platforms implement age verification requirements to comply with child safety regulations, Phoha says the technology may create a false sense of security.

Phoha answers three questions about the effectiveness of this technology and is available for interviews on biometric authentication, age verification technology and cybersecurity topics.

What are the challenges and effectiveness of systems that collect biometric information via palms, eyes, face, etc.?

“Typically, age estimation from a face using a camera relies on features such as wrinkles, skin condition and sagging. The deep learning (AI) models are trained on millions of labeled faces to recognize age-specific features on a face. The training labels adjust for gender, race, etc. Although literature supports high accuracy in age estimation, ranging from a difference of two to three years, I believe the methods are highly susceptible to spoofing.

“Also, the accuracy of authentication methods depends on lighting conditions, pose, etc. Typically, the features account for the variations in light, pose, etc. The literature also suggests using eyes and typing patterns, including the use of language, to predict age, although these studies are not common.”

Tell us more about authentication methods and their reliability for protecting minors online.

“Most authentication is done through deep learning and pattern-recognition algorithms. However, all camera-based methods are susceptible to spoofing. Earlier experiments with presentation attacks showed that a copy of a face printed on paper was able to fool such systems. The system proposed by Roblox will use 3D, but that can also be fooled, for example, by a simple skin-colored smooth mask (to appear as younger, healthier skin) or a dummy 3D face made of silicone. Because the verification of an individual’s face may be done using the individual’s own laptop or desktop camera, it will likely not have sophisticated hardware to test for liveness, such as twitching, blood flow, sweat glands, etc.”
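The distinction Phoha draws between matching a face and verifying that it is live can be illustrated with a toy sketch. Everything here is invented for illustration, including the “frames” and the motion threshold; real liveness detection analyzes far richer signals such as blinking, blood flow and skin texture.

```python
def matches_enrolled_face(frame, enrolled_id):
    """Stand-in for a face matcher: a printed photo of the enrolled
    user 'matches' exactly as well as the live user does."""
    return frame["identity"] == enrolled_id

def is_live(frames, min_motion=1):
    """Naive liveness cue: a static spoof such as a printed photo shows
    no frame-to-frame variation, while a live face moves slightly."""
    motion = sum(abs(a["pose"] - b["pose"]) for a, b in zip(frames, frames[1:]))
    return motion >= min_motion

# A printed photo held up to a laptop camera: right face, zero motion.
spoof_frames = [{"identity": "alice", "pose": 0}] * 5
# A live user: right face, small natural head movement between frames.
live_frames = [{"identity": "alice", "pose": p} for p in (0, 1, 0, 2, 1)]

for label, frames in [("spoof", spoof_frames), ("live", live_frames)]:
    matched = all(matches_enrolled_face(f, "alice") for f in frames)
    print(label, "face match:", matched, "liveness:", is_live(frames))
# Face matching alone accepts both streams; only the liveness cue
# distinguishes the static photo from the moving, live face.
```

The sketch shows why a matcher alone is not enough: the spoofed stream passes the identity check perfectly, and only an additional liveness signal, however crude, separates it from a live user.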

What are the broader implications for how tech platforms balance safety with privacy concerns?

“If the features of the face (and not the images of the face) are stored (or destroyed immediately) after collection, one can hope that no personally identifiable information will be discernible.”

Faculty Expert

Professor
Electrical Engineering and Computer Science

Media Contact

Daryl Lovell
Associate Director of Media Relations

Students Learn to Use AI as ‘Creative Partner’ at Newhouse Summit /2025/12/10/students-learn-to-use-ai-as-creative-partner-at-newhouse-summit/ Wed, 10 Dec 2025 21:40:02 +0000 /?p=330079 The school's inaugural AI Creative Summit drew industry professionals to engage with students from across the University.


Professor Milton Santiago (left) moderates the AI in Creative Practice panel during the inaugural Newhouse AI Creative Summit. (Photo by Md. Zobayer Hossain Joati)

Students Learn to Use AI as ‘Creative Partner’ at Newhouse Summit

The school's inaugural AI Creative Summit drew industry professionals to engage with students from across the University.
Genaro Armas Dec. 10, 2025

Students raced against the clock to create content powered by artificial intelligence (AI) and mastered cutting-edge tools under the guidance of industry experts during the inaugural AI Creative Summit.

The two-day program combined hands-on workshops with a fast-paced content creation competition to explore how generative AI is transforming creative workflows. More than 60 students, faculty and staff from across the University participated in the summit in November.

“These students are going to enter the workforce with a huge advantage because they’re learning to use AI as a creative partner, not a replacement for their ideas,” said Ken Collins, director of research and development at American High. Collins led a workshop on AI video generation and served as a mentor and judge.

“The work I saw during the competition showed real strategic thinking about when and how to use these tools effectively,” he said.

Learning From Industry Leaders

Day 1 featured workshops and panels covering AI-powered image, video and audio generation. Students learned from professionals including Hailey Tredo, head of AI at ; Drew Muckell ’15, executive producer at ; and representatives from Adobe who demonstrated Firefly Video and AI-powered features in Premiere.

A lunchtime panel titled “AI in Creative Practice” sparked discussions about how professionals integrate these tools into their work while navigating ethical considerations around copyright, authorship and authenticity.

“AI isn’t coming to the communications industry—it’s already here. Our job is to make sure Newhouse students aren’t just keeping up with these changes but leading them,” said Adam Peruta, an associate professor and director of the advanced media management master’s program. He and Milton Santiago, assistant professor of visual communications, organized the summit.

“This summit gave them a chance to experiment, fail fast and build confidence with tools they’ll be expected to master on day one of their careers,” Peruta said.

Milton Santiago (left) and Adam Peruta provide opening remarks for the Newhouse AI Creative Summit. (Photo by Md. Zobayer Hossain Joati)

Competition Challenges Creativity

The summit’s second day featured a creative “hackathon,” in which students were divided into 10 teams and given five-plus hours to produce original content for a themed challenge. Visiting professionals and faculty provided guidance and feedback.

Maya Rizzo ’27, an advertising major, said the experience pushed her to think differently about creative problem-solving.

“Having only a few hours to create something from scratch forced us to make quick decisions and trust the process,” Rizzo said. “I walked away with a much better understanding of how AI can speed up the creative workflow without taking over the storytelling.”

Student team entries ranged from cinematic live-action spots to fully animated pieces. The variety showcased the flexibility of AI tools across different creative approaches.

Group of students collaborating around a round table with laptops and drinks in a modern, brightly lit campus space
Advertising major Maya Rizzo ’27 (right) works with her team on their submission for the AI Creative Summit hackathon. (Photo by Alicia Hoppes)

Competition Winners

First Place: “IKEA Conspiracy Theory”

  • Max Chizmadia ’29, Bandier Program for Recording and Entertainment Industries
  • Jesse Mair ’29, Bandier
  • Theo Stewart, advanced military visual journalism program (AVMJ)

Second Place: “Lost Epoch”

  • Yvonnea Achancho ’27, finance (Whitman School of Management)
  • Connor Blake, AVMJ

Third Place: “Cowboy Cardio”

  • Alex Cai ’26, art photography (College of Visual and Performing Arts)
  • Dean Lourenco ’26, visual communications (graphic design)

The impressive work showcased students’ creative potential when emerging technologies are integrated responsibly into storytelling workflows.

“What blew me away was watching students take concepts they learned only a day earlier in the workshops and apply them under real pressure with tools that didn’t even exist a year ago,” Santiago said. He and Peruta hope to hold the summit annually as the tools reshaping creative industries continue to evolve.

“Students weren’t just pushing buttons. They were making creative decisions, collaborating and problem-solving,” Santiago added. “Employers are looking for that exact kind of adaptability in the workplace today.”

The AI Creative Summit was supported by Adobe, American High and the .

The post Students Learn to Use AI as ‘Creative Partner’ at Newhouse Summit appeared first on Syracuse University Today.

]]>
Panel discussion at the Newhouse AI Creative Summit with five speakers seated on stage in front of a large screen displaying the event title 'AI in Creative Practice' and speaker names. Audience members are visible in the foreground
Why AI Can’t Replace Computer Scientists /2025/12/10/why-ai-cant-replace-computer-scientists/ Wed, 10 Dec 2025 19:16:54 +0000 /?p=330026 Engineering and computer science students are learning how to build the next generation of AI approaches that run responsibly, efficiently and ethically.

The post Why AI Can’t Replace Computer Scientists appeared first on Syracuse University Today.

]]>
STEM Why AI Can’t Replace Computer Scientists

Mechanical and aerospace engineering professor Zhenyu Gan (left), civil and environmental engineering professor Yizhi Liu (second from left), electrical engineering and computer science department chair Alex Jones (center) and electrical engineering and computer science professor Garrett Katz (second from right) examine the autonomous manufacturing robots in the Center for Advanced Semiconductor Manufacturing with Brandon Lyubarsky ’26.

Why AI Can’t Replace Computer Scientists

Engineering and computer science students are learning how to build the next generation of AI approaches that run responsibly, efficiently and ethically.
John Boccacino Dec. 10, 2025

When it comes to computer programming, AI is a valuable tool that can write, debug and optimize code on demand.

However, those tools don’t generate perfect code and can’t replace computer science professionals who possess the critical thinking and understanding of algorithms and complex system architecture needed to write effective code, says , the Klaus Schroder Endowed Professor for Engineering and the electrical engineering and computer science department chair in the .

Professional headshot of Íű±ŹĂĆ administrator in navy windowpane suit and orange tie against blurred campus background.
Alex Jones

“Students can use AI tools to help them generate code structures and skeletons, but that’s not a replacement for understanding the foundations of computer science and troubleshooting the issues you encounter,” says Jones, a fellow of the Institute of Electrical and Electronics Engineers.

Recently, Jones helped secure $4.5 million in research funding for AI hardware acceleration, semiconductor design and workforce development. Enhanced hardware resources, combined with cutting-edge AI research on campus, set students up for success through access to industry-grade large language models, foundation models and other types of AI being developed, Jones says.

“We are constantly trying to find ways to integrate new ideas into the courses that we offer, while looking at ways that we can offer relevant and topical content with a technical depth that makes it useful in the field,” Jones says. “Our programs immerse students in the different forms of AI, looking at the AI approaches and the types of hardware designs that are important to run these efficiently.”

Jones sat down with SU Today to discuss how Syracuse’s approach prepares students to not just use AI, but to build the next generation of AI breakthroughs.

Q:
How do our degree programs help graduates tackle the challenges presented by AI?
A:

Our goal is to help educate software scientists and hardware engineers on what AI is, the many types of AI approaches out there and how they can be used properly and efficiently. There are challenges anytime you have a technology that has developed fast, where you’re constantly pushing the envelope of what it can do.

Our graduates are equipped to help identify and shape how this technology can move forward responsibly, efficiently and ethically, and they can be part of building the next generation of AI approaches. There’s a huge opportunity to make improvements to these AI tools, to make them more efficient and able to solve problems they can’t currently solve, without having to absorb as many resources as they currently do.

Q:
What are some of the foundational skills that will make our students uniquely positioned to work in these improved AI systems?
A:

We have classes that cover the different forms of AI, everything from natural language processing to deep learning and agentic AI. We’re teaching students how to write programs using OpenAI and other application programming interfaces (APIs). Then there’s understanding algorithms, discrete mathematics and finite automata. These skills are not specifically related to AI, but they are part of the computer science theory background that is helpful and important when you want to write effective software.

Hands typing on laptop with holographic visualization of colorful data streams, binary code, and flowing network lines representing artificial intelligence and machine learning processes.

Our students understand questions like: What does it mean to have something with this kind of complexity? When is it OK to use this? How do I parallelize something without increasing its complexity? Those are foundational computer science concepts that go beyond basic Python programming.

Q:
How do we prepare our students to be nimble in an ever-evolving industry?
A:

Computer science has always operated that way. If you look at Moore’s Law—the speed and capability of computers can be expected to double every two years—that’s growth at an exponential rate. So, how do our students live on an exponential curve? How do they take advantage of exponential technologies? They learn the underlying principles of the skills today so they can use continuous learning and education to stay current with the latest technologies. That’s what will make you a successful computer scientist.
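The exponential curve Jones describes is easy to make concrete. A minimal sketch in Python (the function name and time spans are illustrative assumptions, not from the article):

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth multiple after `years`, assuming capability doubles
    every `doubling_period` years (Moore's Law as stated above)."""
    return 2 ** (years / doubling_period)

# Over a four-year degree, capability quadruples:
print(moores_law_factor(4))          # 4.0
# Over a 20-year career, it grows roughly a thousandfold:
print(round(moores_law_factor(20)))  # 1024
```

The thousandfold gap between graduation-day skills and mid-career technology is the point: only the underlying principles, plus continuous learning, survive that kind of curve.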

We also keep a lot of the same disciplines under the same roof. Electrical engineers can easily take computer science classes. There’s so much richness in the availability of classes. If you have an interest, you can customize what you study to make yourself a unique and sought-after graduate, and that differentiates Syracuse from other places.

The post Why AI Can’t Replace Computer Scientists appeared first on Syracuse University Today.

]]>
Professor and students examine collaborative robotic arms working with electric vehicle battery components in advanced manufacturing laboratory at Syracuse University.
Open Source Program Office Secures $719K Grant /2025/12/03/open-source-program-office-secures-719k-grant/ Wed, 03 Dec 2025 21:44:14 +0000 /?p=329755 Funding from the Sloan Foundation ensures OSPO can establish a lasting campus presence, integrating open-source development into academics and research.

The post Open Source Program Office Secures $719K Grant appeared first on Íű±ŹĂĆ Today.

]]>

Open Source Program Office Secures $719K Grant

Funding from the Sloan Foundation ensures OSPO can establish a lasting campus presence, integrating open-source development into academics and research.
Wendy S. Loughlin Dec. 3, 2025

The University’s Open Source Program Office (OSPO) has received a two-year, $719,330 grant from the Sloan Foundation to transition from a grant-funded initiative to a sustainable, permanent University institution.

“This grant represents a critical milestone in our journey to make the OSPO a permanent part of Syracuse University,” says director Collin Capano. “Over the next two years, we’ll be working to establish sustainable funding mechanisms and integrate open-source development more deeply into our academic curriculum, ensuring that OSPO continues to serve our community long after Sloan Foundation funding ends.”

OSPO, a joint initiative of the and the , serves as a bridge between academic research and open-source software development, helping faculty across disciplines create, maintain and share research software while providing students with hands-on experience in collaborative software development.

Since its founding in 2023, OSPO has supported projects spanning fields from psychology and political science to physics and finance, and has engaged students in developing tools that advance both research and student career readiness, according to Capano.

The renewal grant will enable OSPO to expand its impact through several key initiatives. OSPO’s successful software development program will be transformed into a dual-track system: an academic course allowing students to earn credit while working on faculty research projects, and a paid internship program focused on OSPO-led initiatives.

“These projects address University needs while positioning students at the forefront of emerging technologies, particularly artificial intelligence,” Capano says. “Students who participate in our programs graduate with more than just course credits—they have public portfolios showcasing real contributions to actual projects. This demonstrated experience with industry-standard tools gives our graduates a significant competitive advantage.”

OSPO has already developed several innovative tools for the Syracuse community, including a data storage finder that helps faculty identify and budget for research data storage solutions. Projects currently under development include an AI-powered research chatbot trained on papers published by Syracuse University faculty and an AI-based preprint server alert system that monitors new research publications and delivers personalized summaries to faculty based on their interests.

OSPO also addresses a critical federal mandate requiring all federally funded research to be publicly accessible, as academic institutions must provide infrastructure and expertise to support open science practices.

During the grant period, OSPO staff will conduct a formal evaluation of possible institutional homes for the program and document findings in a comprehensive playbook to guide other universities developing open-source programs.

OSPO will also expand its educational offerings through a series of microcredentialed workshops covering research computing fundamentals and open-source development practices, with materials made freely available to students from any discipline.

“The integration of open-source and AI development into the curriculum enhances our students’ employability while strengthening the University’s research capacity,” Capano says. “We’re creating a model that other academic institutions can adopt and adapt for their own communities.”

In addition to Capano, OSPO co-principal investigators are , vice president for research; , dean of University Libraries; and , associate vice president for information technology and chief technology officer.

OSPO was established with seed funding from the Sloan Foundation; the renewal grant will support the program through October 2027.

The post Open Source Program Office Secures $719K Grant appeared first on Syracuse University Today.

]]>
Aerial view of Íű±ŹĂĆ campus in winter, with snow-covered buildings, trees, and walkways along the main promenade
Connecting the Orange Network /2025/11/26/connecting-the-orange-network/ Wed, 26 Nov 2025 13:51:02 +0000 /?p=329502 Ask Orange Alumni, an AI-powered networking tool, helps transform student-alumni relationships.

The post Connecting the Orange Network appeared first on Íű±ŹĂĆ Today.

]]>

Connecting the Orange Network

Ask Orange Alumni, an AI-powered networking tool, helps transform student-alumni relationships.
Nov. 26, 2025

When Zahra Jarrell ’29, a public relations major in the , had questions about breaking into sports administration and sports media, she wasn’t sure where to turn.

That’s when she discovered , the Syracuse University Alumni Association’s artificial intelligence (AI)-powered networking tool that’s modernizing how the Orange community connects.

Breaking Down Barriers to Mentorship

Ask Orange Alumni graphic with Íű±ŹĂĆ Block S and wordmark

Ask Orange Alumni eliminates traditional networking hurdles. Students and alumni visit the website, submit questions and let AI work behind the scenes to find alumni matches. Faculty and staff members can also use the tool to identify guest speakers and stay up to date on current industry trends.

For Jarrell, this process opened unexpected doors.

“I spoke to many alumni from different class years who currently work in sports administration and sports media,” she says. “We talked about career paths in sports administration and PR, the importance of networking and they offered insightful advice about navigating opportunities as a student and building professional connections early on.”

The Power of Giving Back

Alumni like Jeffrey Wells G’23, who earned an MBA from the , have found it fulfilling to respond to fellow Orange community members.

“The experience has been very rewarding,” Wells says. “The feedback I have received from alumni is that they not only value but also appreciate my input. I am grateful that I have had a chance to help other Syracuse alumni.”

Building Confidence and Community

The impact goes beyond simple question-and-answer exchanges. For Jarrell, connecting with multiple alumni provided confidence and direction.

“It was such a valuable experience, giving me a clearer sense of direction and making me feel more confident about pursuing my goals at Syracuse and beyond.”

Wells sees the bigger picture: “Ask Orange Alumni is a great idea for a forum where Orange alumni can help other Orange alumni and students. I hope that it continues to grow.”

How to Get Involved

Whether you’re a current student seeking guidance, a recent graduate navigating early career decisions or an established alum ready to give back, Ask Orange Alumni makes connections effortless.

For students and alumni seeking advice: Visit , submit questions and receive personalized responses from alumni who’ve been in your shoes. Be sure to say thank you to the alumni who step up to help!

For alumni ready to help: for better matching, respond to questions that align with your expertise and choose how you’d like to help—answer in the system, offer virtual meetings or suggest in-person connections.

Join the Growing Orange Network

Ask Orange Alumni is a bridge connecting generations of Syracuse graduates and current students, fostering authentic relationships that define the Orange experience.

Ready to be part of this story? Visit and discover how Orange connections can transform your career journey.

Story by the Office of Alumni and Constituent Engagement

The post Connecting the Orange Network appeared first on Syracuse University Today.

]]>
Person sitting at a desk waving during a video call on a laptop, with a notebook and pen in front and a bright indoor background