Rob Hamilton

Associate Professor, Department Head

About

Associate Professor of Music and Media, Rob Hamilton joined the Department of Arts at Rensselaer with an appointment in the Games and Simulation Arts and Sciences program in 2015. As a composer, performer, researcher, and software designer, his creative and analytical practice explores the cognitive implications of the spaces between interactive game environments, network topographies, and procedurally generated sound and music.

He holds a Ph.D. in Computer-based Music Theory and Acoustics as well as an M.A. in Music, Science and Technology from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) in the Department of Music, an M.M. in Computer Music Composition from the Peabody Institute of the Johns Hopkins University, and a B.A. in Music and Cognitive Science from Dartmouth College.

His music, writings and research have been presented at international academic conferences, in concert at festivals and galleries, and at commercial venues and trade shows including SIGGRAPH, AES, IEEE VR, GDC, ACM CHI, ISEA, NIME, SMC, ICAD, Ircam Forum, CMMR, SEAMUS, NACVGM and ICMC, and published in journals such as IEEE Access, JNMR, Leonardo Music Journal, JAES, JMUI, and Organised Sound.

Dr. Hamilton currently sits on the Board of Directors for the Electronic Music Foundation Institute (EMFi). He has previously held seats on the Boards of Directors for organizations such as the International Computer Music Association (ICMA) and the International Community for Auditory Display (ICAD).

 

Education & Training

Ph.D., Computer-based Music Theory and Acoustics, Stanford University, Center for Computer Research in Music and Acoustics (CCRMA)

M.A., Music, Science and Technology, Stanford University, Center for Computer Research in Music and Acoustics (CCRMA)

M.M., Computer Music Composition, The Peabody Institute of the Johns Hopkins University

B.A., Music and Cognitive Science, Dartmouth College

 

Other affiliations: Arts, GSAS

Research

My current research interests encompass elements of composition, electronic music, human-computer interaction, cognition, artificial intelligence, sonification, network performance, gaming and virtual reality. In short, I'm interested in digging deeper into the ways in which we create, control, listen to and understand sound and music. 

Current research projects include:

man·tra (2023-present)

keywords: machine learning, artificial intelligence, spatial audio, generative ai

man·tra is a reactive performance environment that leverages the power of generative AI to re-compose the musical intentions and reactions of performers and audience members alike. Set within a multi-channel immersive sonic space, man·tra is a polyvalent experience, capable of running as an autonomous installation, as a live concert performance, or as an interactive space where the audience itself informs the work.

At its core, man·tra uses a bespoke AI/ML model trained on newly composed and improvised rhythmic and melodic patterns, or mantras, to replicate musical interactions. Artists working with the system create repetitive patterns of beats, serving as a layer over which they record a series of short musical responses. This material trains the model, allowing it to learn and recreate the musical style and structure of the artists themselves. During a performance of man·tra, the AI recomposition is spatialized across 64 channels of sound, continually remixing its own output into an immersive musical experience, guided either by the composer, by a live performer, or by the audience themselves through sensors tracking their movement through the space.
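As an illustration of the kind of multi-channel panning such a space implies, here is a minimal Python sketch of equal-power panning across a ring of 64 speakers. The channel count comes from the project description above; the ring layout, function names and cosine taper are illustrative assumptions, not man·tra's actual spatialization code:

    import numpy as np

    NUM_CHANNELS = 64  # matches the 64-channel immersive space described above

    def ring_pan_gains(azimuth: float, num_channels: int = NUM_CHANNELS) -> np.ndarray:
        """Equal-power gains for a source at `azimuth` radians on a speaker ring."""
        speaker_angles = np.linspace(0, 2 * np.pi, num_channels, endpoint=False)
        # Angular distance from the source to each speaker, wrapped to [0, pi]
        dist = np.abs((speaker_angles - azimuth + np.pi) % (2 * np.pi) - np.pi)
        spacing = 2 * np.pi / num_channels
        # Speakers within one spacing of the source get a cosine taper; the rest stay silent
        gains = np.where(dist < spacing, np.cos(dist / spacing * np.pi / 2), 0.0)
        return gains / np.linalg.norm(gains)  # normalize so total power stays constant

    # Pan one second of placeholder mono audio (48 kHz) to a position on the ring,
    # e.g. a position driven by a sensor tracking a listener's angle in the space.
    mono = np.random.randn(48000)
    out = mono[:, None] * ring_pan_gains(np.pi / 3)[None, :]
    print(out.shape)  # (48000, 64)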

 

Coretet (2018-present)

keywords: extended reality, virtual reality, procedural audio, instrument design

Coretet is a virtual reality musical instrument that explores the translation of performance gestures and mechanics from traditional bowed string instruments into an inherently non-physical implementation. Originally built using Unreal Engine 4 and Pure Data, Coretet offers musicians both a flexible and articulate musical instrument to play as well as a networked performance environment capable of supporting and presenting a traditional four-member string quartet. Building on traditional stringed instrument performance practices, Coretet was designed as a futuristic twenty-first-century implementation of the core gestural and interaction modalities that generate musical sound in the violin, viola and cello.

Coretet exists as a client-server software system designed to be controlled using an Oculus Quest 2 head-mounted display (HMD) and the Oculus Touch hand-tracking controllers. The current instrument and performance environment are built using Unreal Engine 5. Audio output is generated from interaction data streamed from the engine to a Pure Data (PD) server via Open Sound Control (OSC). Within PD, gestural control data from Coretet is processed and used to control a variety of audio generation and manipulation processes, including the [bowed~] string physical model from the Synthesis Toolkit (STK).
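As a rough illustration of the client-server link described above, the following Python sketch streams a single gesture frame over OSC using the python-osc library, standing in for the engine-side sender. The port, OSC address and parameter names are hypothetical stand-ins; Coretet's actual message schema is not documented here:

    from pythonosc.udp_client import SimpleUDPClient

    # The PD server would listen on this port, e.g. via [netreceive -u -b] and
    # [oscparse] in Pd vanilla, or mrpeach's [udpreceive]/[unpackOSC] objects.
    PD_HOST, PD_PORT = "127.0.0.1", 9000  # assumed address and port
    client = SimpleUDPClient(PD_HOST, PD_PORT)

    def send_bow_frame(string_index: int, bow_pressure: float, bow_velocity: float) -> None:
        """Send one frame of bowing data; the address and fields are illustrative."""
        client.send_message("/coretet/bow", [string_index, bow_pressure, bow_velocity])

    send_bow_frame(0, 0.8, 0.35)  # e.g. a heavy, slow stroke on the lowest string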

Primary Research Focus
Music and Sound, Composition, Music Technology, Extended Reality, Music and Gaming
Other Focus Areas

Procedural audio and music; machine learning and AI for music and sound; instrument design and development; mobile music

Teaching

My current teaching focuses on hands-on applications of technology systems for the creation, control and understanding of sound and music.  

Current Courses

For AY '24-'25 I will be teaching one course per semester:

Fall '24: Designing Musical Games (ARTS 2961)
Spring '25: Music and Technology II (ARTS 4160)

 

Designing Musical Games (ARTS 2961)

Fall '24, Monday/Thursday 12:00 - 1:50 pm, SAGE LABS 2411

Syllabus: https://robertkhamilton.github.io/arts2961/

One of the most exciting areas of music technology development is happening in the realm of gaming and interactive virtual space. Music and sound design play crucial roles in the design of gaming environments, narratives and flow. And as designers create ever more innovative game experiences featuring rich graphics, fast multiplayer networking and next-generation controllers, new techniques for creating immersive game music and sound to complement and showcase these advances become not only possible but necessary.

This studio class will explore cutting-edge techniques for building interactive sound and music systems for games and 2D/3D rendered environments. To better understand the link between virtual space and sound, students will learn the basics of designing sound and composing music for interactive game spaces by designing and implementing rich musical games within the Unity and Unreal gaming engines. Coursework will require the ability and desire to code game logic and design game environments. Techniques for integrating sound and music within games, including game-centric middleware tools like FMOD and WWISE, interactive sound synthesis, and computer networking using Open Sound Control, may be explored.

Working in teams or on their own, students will design a music-rich game experience, compose music, create sound material, and implement their own playable interactive musical games.

 

Music and Technology II (ARTS 4160)

Spring '25, Tuesday/Friday 12:00 - 1:50 pm, SAGE LABS 2510

Syllabus: https://robertkhamilton.github.io/arts4160/

Music and Technology II assumes knowledge of and experience in using computer systems to create, manipulate and engage in research within the fields of sound design, electronic music, and electroacoustic composition and performance. The course is directed to upper-level undergraduate and graduate students as a project-based seminar which will guide their progress through the design, research and implementation of an individual project over the course of the semester. The class is divided between a group seminar, focusing on the presentation of the aesthetic/theoretical/technical/historical issues related to the field, and a workshop/lab.

Topics will be tailored to current issues raised through individual and group interests, including but not limited to software design for musical systems, sound design, musical composition, audio engineering topics, microprocessors and physical computing, computer-aided composition and generative music, musical interfaces, machine learning and artificial intelligence, sound spatialization, music in games and gaming environments, and theoretical research in music synthesis and composition. Each student will propose a musical project focus and scope of work for the semester, as well as a basis for the evaluation of their individual work. In this way the hope is to combine the approaches of a music-focused research seminar, private study, guided research and group workshop/practicum.

Cultural and historical issues will be addressed through a series of student-led discussions of readings, listenings, videos, guest lectures and concerts. Students will also be asked to make use of the media collection at Folsom Library, as well as online access to media collections and journals such as the Computer Music Journal and Leonardo Music Journal.

The two-hour class sessions will generally be divided between classroom seminar presentation and workshop/lab. The final class session will be an informal presentation of individual projects completed in the class.

 

Interdisciplinary Research Seminar: Music and AI (ARTS 4880)

Spring '24, Tuesday/Friday 12:00 - 1:50 pm, SAGE LABS 2510

Syllabus: https://robertkhamilton.github.io/arts4880/

Advances in machine learning and artificially intelligent systems are currently being applied to creative tasks such as musical composition and performance. This course is an advanced seminar focusing on the current state of AI research and application in musical domains. Topics to be discussed include the ethics of AI's use in music, the ways that trained systems are being applied to the creation and performance of music, and the hands-on application of AI/ML toolsets and frameworks for musical expression.

This is a course introducing music majors to advanced research topics of the Rensselaer music faculty. Each semester a member of the music faculty will focus the seminar on a research topic or paradigm related to their own body of artistic and technological research. Sample topics might include Spatial Music and Sound, New Instrument Design, Network Music, Music Information Retrieval, Ethnomusicology, Sonification Art and Science, Music and Logic, Spectralism and Beyond, Music Herstory (feminist music composition), and Experimental Music and Sound History. Through hands-on creative research, students will explore questions of both musical and technological significance while engaging the semester's topic through their own creative practice.

Publications

My publications primarily address the uses of technology for musical creation and have been featured in books, journals and conference proceedings including SIGGRAPH, Audio Engineering Society (AES), IEEE VR, Game Developers Conference (GDC), ACM CHI, IEEE Access, Journal of New Music Research (JNMR), Leonardo Music Journal, Journal of the Audio Engineering Society (JAES), Journal on Multimodal User Interfaces (JMUI), Organised Sound, ISEA, NIME, SMC, ICAD, Ircam Forum, CMMR, SEAMUS, NACVGM and ICMC.

The following is a selection of recent publications in Scopus. Rob Hamilton has 36 indexed publications in the subjects of Computer Science, Arts and Humanities, and Engineering.

Rob Hamilton. ICMC 2021 - Proceedings of the International Computer Music Conference 2021, 2021, pp. 74-78.

Luca Turchet, Rob Hamilton, Anıl Çamcı. IEEE Access, 9, 2021, pp. 15810-15832.

Cem Çakmak, Rob Hamilton. AES: Journal of the Audio Engineering Society, 68, 2020, pp. 747-755.

Anıl Çamcı, Rob Hamilton. Journal of New Music Research, 49, 2020, pp. 1-7.

Rob Hamilton. Proceedings of the 2019 International Computer Music Conference, ICMC-NYCEMF 2019 - International Computer Music Conference New York City Electroacoustic Music Festival, 2019, pp. 202-206.

Rob Hamilton. 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings, 2019, pp. 1510-1512.

Rob Hamilton. 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings, 2019, pp. 1305-1306.

Rob Hamilton. Springer Series on Cultural Computing, 2019, pp. 243-257.

Ian Rios, Rob Hamilton. ICMC 2018 - Proceedings of the 2018 International Computer Music Conference, 2018, pp. 304-308.


Portfolio

Performance/Concert Works

Elegy (Ready, Set, Rapture) is a solo work for the Coretet extended-reality virtual double bass. Originally composed for bassist Jeremy Baguyos for performance at the 2019 International Society of Bassists conference in Bloomington, Indiana, Elegy (Ready, Set, Rapture) has recently been performed at the Mise-En Festival (NYC), NIME 2023 (Mexico City), SEAMUS 2023 (NYC), TENOR 2023 (Boston), SMC 2022 (Saint-Étienne, France), the Ecos Urbanos Festival (Mexico City), NYCEMF 2020 (NYC), the Moxsonic Festival 2020 (MO) and ICMC 2020 (Santiago, Chile).