Recent advancements in image editing applications such as relighting, compositing, harmonization, and virtual object insertion have opened up new horizons in visual media, augmented reality, and virtual production, especially with the rise of powerful generative image models. However, evaluating the quality of results for these applications remains a significant challenge. Traditional image quality metrics are not always effective at capturing the perceptual realism and subtle effects these technologies aim to achieve. Relying on user studies, meanwhile, is time-consuming and introduces variability, making it difficult to compare methods consistently. To address these issues, this workshop explores and develops standardized evaluation metrics that bridge the gap between quantitative assessment and qualitative perception.
The UniLight workshop provides a platform to discuss and explore these topics:
David Forsyth is currently the Fulton-Watson-Copp Chair in Computer Science at UIUC. He has published over 170 papers on computer vision, computer graphics, and machine learning. His textbook, "Computer Vision: A Modern Approach," is widely adopted as course material.
Belen Masia is a tenured Associate Professor in the Computer Science Department at Universidad de Zaragoza, Spain, and a member of the Graphics & Imaging Lab. Her research lies in the areas of material appearance modeling and virtual reality, with a focus on leveraging human perception to improve content creation tools and algorithms.
Manmohan Chandraker is a full professor in the CSE department of the University of California, San Diego. His interests are in computer vision and machine learning, with applications in self-driving and augmented reality.
Julien Philip is a Senior Research Scientist at Netflix Eyeline Studios working on computer graphics, vision, and machine learning. He is interested in neural rendering, multiview image editing, and relighting.
David Lindell is an Assistant Professor at the University of Toronto and the Vector Institute, working on physically based intelligent sensing. His recent work includes inverse rendering and relighting, as well as novel ways to model light.
Ko Nishino is a professor at Kyoto University's Graduate School of Informatics, where he leads the Computer Vision Laboratory. His research focuses on establishing the theoretical foundations and efficient implementations of computational methods for better understanding people, objects, and scenes from their appearance in images and video. He also develops novel computational imaging systems that can see beyond what we see.
Format and logistics: The workshop is held in person.
Our program features invited speakers, a panel discussion, and both invited and accepted papers.
Time | Event |
---|---|
1:00 PM – 1:10 PM | Welcome and Introductions |
1:10 PM – 2:50 PM | Invited Talks (25 mins x 4) |
2:50 PM – 3:20 PM | Contributed Talks (5 mins x 5) and Posters |
3:20 PM – 3:45 PM | Break |
3:45 PM – 5:00 PM | Invited Talks (25 mins x 3) |
5:00 PM – 5:45 PM | Panel |
We welcome submissions related to lighting and its perception across various contexts. Both novel research and previously published work are acceptable. Submissions may take the form of posters, extended abstracts, or full papers. Concurrent work is also encouraged, to foster discussion across disciplines. All submissions will be reviewed for relevance by our organizing committee. Please note that this workshop will not produce formal proceedings.
Submissions are handled through OpenReview at this link. We accept submissions in two tracks:
Submissions to both tracks will be reviewed for relevance and soundness.
Previously published work can be in any format. Novel work can be in either of these formats:
All submissions should be in PDF format and follow the ICCV 2025 formatting guidelines. Accepted submissions will be presented by the authors at the workshop.
Milestone | Date |
---|---|
Submission Deadline | August 19th, 2025 |
Notification of Acceptance | September 19th, 2025 |
Workshop Date | October 19th or 20th, 2025 |
We would like to thank the following organizations for their support: