Call for Papers

Call for Full and Short Papers

Paper submission system

The paper submission system (EasyChair) for full and short technical papers is available at:
https://www.easychair.org/conferences/?conf=acmmm2013

The Workshops system will be available soon. However, the following programs do not use the system for submission. Please contact the corresponding chairs directly. See the corresponding calls for details.

  • Tutorials
  • Panel proposals
Remember that all deadlines are 23:59 (UTC-11) on the indicated dates.
For more information, please visit the Submission Instructions.

 

Overview

ACM Multimedia 2013 calls for full papers presenting interesting recent results or novel ideas in all areas of multimedia and its applications. At the same time, the conference calls for short papers presenting interesting and exciting recent results or novel thought-provoking ideas that are not yet mature enough for a full paper, preferably accompanied by a system demonstration.

ACM Multimedia 2013 seeks contributions in the following 12 areas:

  1. Art, Entertainment, and Culture
  2. Authoring and Collaboration
  3. Crowdsourcing
  4. Media Transport and Delivery
  5. Mobile & Multi-device
  6. Multimedia Analysis
  7. Multimedia HCI
  8. Music & Audio
  9. Search, Browsing, and Discovery
  10. Security and Forensics
  11. Social Media & Presence
  12. Systems and Middleware

When you submit your paper, you will designate both a primary area and a secondary area in the online system. It is your responsibility to read the descriptions of all the areas and select the areas that best match the focus of your submission.

Submitted papers will be reviewed by anonymous reviewers, with anonymous area chairs acting as meta-reviewers. The ACM Multimedia review process is double-blind; papers that violate the anonymity requirement will be rejected without review.

ACM Multimedia 2013 employs a rebuttal phase: after the initial reviews are released, authors may optionally submit a rebuttal. Area chairs and reviewers will take rebuttals into account when making the final decision. Each rebuttal is limited to a maximum of 4,000 characters.

Accepted full papers will be allocated 10 pages each and will be presented either as an oral presentation or a poster at the conference. Unlike in previous years, full papers that are not accepted will NOT be forwarded or accepted as short papers.

Accepted short papers will be allocated 4 pages each and will be presented as posters at the conference.

Accepted full and short papers will appear in the Conference Proceedings and in the ACM Digital Library.


1. Art, Entertainment and Culture

We solicit long and short papers describing the innovative use of digital technology in arts, entertainment and culture, to support the creation of multimedia content, artistic interactive and multimodal installations, the analysis of media consumption and user experience, or cultural preservation. Successful papers should achieve a balance between sophisticated technical content and artistic or cultural purpose.

Papers addressing entertainment applications should clearly advance the state-of-the-art in multimedia technologies and report original forms of media consumption, extending beyond current practice in digital entertainment and computer games. We welcome papers in all areas of multimedia and multimodal systems for art or cultural engagement involving video, computer graphics, and sound, characterized by innovative multimodal interaction and multimedia content processing.

For papers describing fully implemented systems, extensive user evaluation is not a strict condition for acceptance, provided the levels of performance achieved are clearly stated. On the other hand, papers centered on user experience should follow rigorous standards of evaluation. If you’re new to this style of publishing, please feel free to contact the chairs before the deadline for guidance.

We encourage authors to critically examine the artistic, technological and cultural implications and impact of their work, revealing challenges and opportunities of rich societal significance, including cross-fertilization between art and multimedia.

We seek a broad range of integrated artistic and scientific statements describing digital systems for arts, entertainment, and culture including, but not limited to:

  • virtual and augmented reality artworks, including hybrid physical/digital installations
  • analysis of spectator experience in interactive systems or digitally-enhanced performances
  • creativity support tools
  • tools for or case studies on cultural preservation
  • computational aesthetics in multimedia and multimodal systems
  • models of interactivity specifically addressing arts and entertainment
  • active experience of multimedia artistic content by means of socio-mobile multimodal systems

2. Authoring and Collaboration

Creating, modifying and distributing integrated multimedia content are activities increasingly encountered not only in the workplace, but also in learning environments, cultural institutions, the home, and everyday mobile interactions. This growing context of use increases the diversity of end-users, broadens the scope of authoring purpose, and opens up rich opportunities for meaningful collaboration and coordination across multiple points of activity, from data capture to narrative composition, and from mediated conversational discourse to crowdsourced annotation.

The authoring and collaboration area encompasses research across integrated sites of inquiry: content capture and analysis, content browsing and selection, media composition and integration, collaborative authoring and distribution frameworks, annotation and curation, remediation, remixing and reuse, hybrid physical-digital content creation and representation, responsive audience systems. We encourage submissions demonstrating a broad range of human-machine/human-human authoring and collaboration contexts, including fully manual, mixed-initiative, fully automatic, and all other possible flavors of rich-media composition.


3. Crowdsourcing

Crowdsourcing makes use of human intelligence and a large pool of contributors to address problems that are difficult to solve using conventional computation. This new area cross-cuts traditional multimedia topics and solicits submissions dedicated to results and novel ideas in multimedia that are made possible by the crowd, i.e., they exploit crowdsourcing principles and techniques. Crowdsourcing is considered to encompass the use of microtask marketplaces, games-with-a-purpose, collective intelligence, and human computation.

Topics include, but are not limited to:

  • Exploiting crowdsourcing for multimedia generation, interpretation, sharing or retrieval
  • Learning from crowd-annotated or crowd-augmented multimedia data
  • Economics and incentive structures in multimedia crowdsourcing systems
  • Crowd-based design and evaluation of multimedia algorithms and systems
  • Crowdsourcing in multimedia systems and applications

Submissions should have both a clear focus on multimedia and also a critical dependency on crowdsourcing techniques.


4. Media Transport and Delivery

Media transport and sharing refer to the distribution of media data (such as text, still images, audio, animations, video, and interactive content) across networks. We seek strong submissions that broadly explore media transport and sharing. This includes, but is not limited to:

  • Complete networked multimedia systems or applications that provide novel experience or improved performance over the state-of-the-art
  • Theoretical or experimental analysis of media transport and sharing mechanisms
  • Documented enhancements to one or more system components that involve media transport and sharing

Such individual components include, but are not limited to:

  • Networking, streaming, transport protocols or mechanisms
  • Operating systems and storage
  • Overlay and peer-to-peer distribution
  • Synchronization of multi-modal data
  • Scalable media compression and coding

We encourage submissions that touch upon hot areas such as immersive systems, multi-modal sensor networks, office of the future, healthcare, virtual or augmented reality, 3D video or graphics, games, collaboration systems, social networks, peer-to-peer applications or systems, multimedia sharing services, Massive Open Online Courses (MOOC), mobile multimedia, and cloud-based systems.


5. Mobile & Multi-device

The proliferation of increasingly capable mobile devices opens up exciting possibilities for mobile multimedia. Mobile devices provide multimedia access, management and consumption virtually anywhere and anytime. But they are also enablers for novel forms of creativity and interaction with both physical and virtual worlds.

Mobile devices connect. They orchestrate seamless media authoring, interaction and sharing between people and devices. Even more so, they blend into existing device ecologies (e.g. those in a living room) and set the stage for promising multi-device applications with interactive multimedia.

The “Mobile and Multi-device” area solicits contributions that significantly advance the field of mobile multimedia. Papers may have a strong technical contribution, but we also welcome contributions with a strong human-centered focus, e.g. on the design or evaluation of novel interfaces and devices. Topics of interest include, but are not limited to:

  • Mobile multimedia search, sharing, indexing, and retrieval
  • Novel mobile interfaces and device concepts
  • Multi-device interfaces and technologies
  • Tangible multimedia, focusing on multi-device aspects in particular
  • Adaptive multimedia interfaces and technologies for multi-device interaction
  • Mobile interactive media editing, authoring, visualization, and browsing
  • Mobile augmented and mixed reality
  • Mobile devices as companion devices for multi-device interaction with multimedia
  • Mobile and multi-device applications (e.g. mobile video, e-health, assistive technologies, social interactive TV, and gaming)
  • Mobile multimedia in developing countries
  • Experiments, user studies, as well as field and ethnographic studies on mobile multimedia usage, interaction, navigation and sharing
  • Human-centered approaches to mobile and multi-device interaction with multimedia

6. Multimedia Analysis

Advances in multimedia analysis have helped enable us to capture, create, and consume multimedia information with unprecedented ease and frequency. In turn, the size of personal and shared multimedia collections and the availability of associated rich contextual and usage information are both growing quickly. Multimedia analysis must evolve to support interaction with substantial personal and shared multimedia collections, often across mobile and desktop environments. The increasingly multi-modal nature of multimedia data collections affords new opportunities for multimedia and cross-media analysis to progress and address the changing demands of multimedia consumers.

This track seeks submissions that contribute to continued progress in information extraction and processing from multimedia data. We actively encourage submissions that incorporate new modalities, sensors, and information sources into traditional multimedia analysis problems. Topics of interest include but are not restricted to:

  • Multimedia feature extraction
  • Semantic concept detection
  • Cross-media analysis
  • Multi-modal information processing and fusion
  • Temporal or structural analysis of multimedia data
  • Machine learning for multimedia analysis
  • Scalable processing and scalability issues in multimedia content analysis
  • Advanced descriptors and similarity metrics for multimedia data
  • Object recognition/detection/segmentation
  • 3D content analysis
  • Cross-camera content analysis

Submissions are particularly encouraged to combine multiple distinct information streams, sensors, contexts, or data sources in some element of the analysis.


7. Multimedia HCI

The term Multimedia-HCI denotes the intersection of two exciting and long-standing domains of computer science research. This intersection suggests two categories of contributions to ACM MM13:

  1. “HCI serves Multimedia”. In this category, we solicit contributions concerning interaction with multimedia, including multimedia creation. This contribution type has a long tradition, e.g., with respect to video browsing. It covers all aspects of interaction with multimedia content, be it consumptive or manipulative, mobile or stationary (e.g. in large installations), direct or implicit; whether it addresses individual users, collaborating teams, or in-the-large/mass interaction; and whether it concerns singular content or large collections. Interfaces based on novel devices (bendable OLEDs, pico projectors, etc.) and on tangible or ambient interaction are welcome. We also solicit contributions on novel interfaces for media creation; since this field is notoriously underrepresented, we particularly encourage submissions in this sub-area.
  2. “Multimedia serves HCI”, i.e. interaction concepts based on multimedia. Research in this category concerns the use of multiple and time-based media as part of the user interface rather than as the content accessed. First-of-a-kind papers and fresh ideas are expected to dominate here over ‘gradual improvement’ papers, though both are welcome as long as originality is evident. Some examples of possible submissions are: the dynamic use of time-dependent media in smart spaces or everyday appliances for conveying situations (output) and user intent/emotion (input), or as an integral part of art objects, toys, etc.; the translation of ‘big data’ (in data analytics) into time-dependent output that can be directly manipulated; and multimodal immersive interfaces for complex time-critical tasks.

In addition, we particularly solicit contributions on studies of multimedia systems in daily life. Multimedia systems are now frequently deployed in daily life and can be studied either through small-scale qualitative studies or at scale through publicly released web or mobile systems. Understanding how these systems are integrated into everyday routines and used in contexts outside the lab is an exciting new area for Multimedia HCI research. This contribution type can include qualitative and/or quantitative studies of multimedia use that demonstrate significant new understanding of how multimedia systems are used. All types of multimedia systems are welcome, including video, tangible/ambient interfaces, and systems that combine multiple forms of media (e.g. text/video/audio) in new ways. The systems under study can be existing multimedia applications developed by others or novel systems created by the authors.

Originality *and* maturity will be the major acceptance criteria. As to maturity, a standard paper has to demonstrate not only a working proof-of-concept but also a substantial and convincing evaluation. For a more ‘systems’-slanted paper, the evaluation may concentrate on the system itself (measurements, simulations, etc.), but methodologically sound user studies are generally expected in this HCI area.


8. Music & Audio

For this new area, we seek strong technical submissions focused on audio in multimedia, novel usage of aural aspects for multimedia interfaces, and multimodal and non-audio perspectives on information usually considered to be in the audio domain. In multimedia items, the audio channel holds information that is relevant and complementary to that in other channels, such as the visual channel. Focusing on audio also opens up interesting possibilities for novel multimedia interfaces and user interactions. Furthermore, in practice, information relating to audio data can be multimodally encoded; for example, it may co-exist in symbolic form (e.g. closed captioning of speech) or in the form of sensor input and visual images (e.g. recordings of gestures in musical performances). Finally, contextual, social and affective aspects play important roles for this type of data: this can be seen, for example, in the consumption and enjoyment of music, and in the sound design of cinematic productions.

We therefore wish to cast a broad net in defining what is appropriate for this track. Topics of interest include, but are not limited to:

  • Multimedia audio analysis and synthesis
  • Multimedia audio indexing, search and retrieval (at the document level and temporal fragment level)
  • Music and audio annotation, similarity measures and evaluation
  • Multimodal and multimedia approaches to music and audio
  • Multimodal and multimedia context models for music and audio
  • Computational approaches to music and audio inspired by other domains (e.g. musicology, psychology, computer vision, information retrieval)
  • Social data, user models and personalization in music and audio
  • Music, audio and aural aspects in multimedia user interfaces
  • New and interactive musical instruments, systems and other music/audio applications

Submissions should have a clear relation to multimedia: there either should be an explicit relation to multimedia items, applications or systems, or an application of a multimedia perspective, in which information sources from different modalities are considered.


9. Search, Browsing and Discovery

Recent years have witnessed an explosive proliferation of multimedia content. This huge and ever-growing volume of multimedia information leads to “information overload” and poses a compelling demand for effective and efficient access to multimedia content at a very large scale, with an effective and rewarding user experience. A multimedia content access pipeline, assembling search, browsing and discovery, is expected to provide information relevant to a user’s query and to offer user-friendly browsing and knowledge discovery functionalities. In addition to search, discovering latent knowledge within search results and within collections, and mining search results to suggest new content, will lead to novel user search experiences.

This area seeks contributions reporting novel problems, solutions, models, or theories that tackle the issues of search, browsing and discovery, and the interplay between them, over large-scale multimedia collections. We welcome submissions that contribute to continued progress in large-scale search, submissions presenting user-friendly browsing functionalities and interfaces, as well as those proposing effective discovery solutions along content and social dimensions. Topics of interest include but are not limited to:

  • Large-scale multimedia indexing, ranking, and re-ranking
  • Multimedia search system architecture and optimization
  • Interactive and collaborative search
  • User intention modeling, query suggestion, and feedback mechanisms
  • Summarization, visualization and organization of multimedia collections
  • Creative user interfaces and systems for multimedia browsing
  • Knowledge discovery from multimedia content
  • Innovative and emerging multimedia applications based on search, browsing, and discovery
  • Multimedia recommendation and filtering
  • Multimedia search in scientific, enterprise, social and medical domains

10. Security and Forensics

The research born from the synergy between multimedia, security, and forensics is exploding in terms of worldwide interest and potential results. Security and forensics are two very closely related research fields, often overlapping in terms of goals and methodologies and often complementary in terms of needs. Security systems call for robust and efficient solutions to process, possibly in real time, large amounts of data coming from sensors, other acquisition systems, and different sources in order to assess knowledge about people, targets of interest, situations, environments, etc. Forensic tools need to work with the same, or possibly larger, amounts of data, mainly to support investigations in cases of crime or dangerous situations and to preserve evidence in the analysis. Both fields need to replace fully manual procedures with automatic or assisted procedures to improve efficiency, robustness, and accuracy. Both fields work on data that increasingly have a multimedia nature and come from multimodal sources, and should exploit many of the results of the multimedia research community. Thus, most of the multimedia themes, suitably tailored for security and forensics, are welcome, including:

  • 3D scene reconstruction for forensic and security applications
  • Data annotation, indexing and retrieval for forensic and security data
  • Evidence assessment
  • Event and situation assessment for forensic and security applications
  • File carving
  • Forgery and copy detection
  • Multimedia content analysis for forensic and security applications
  • Multimedia cyber security
  • Multimedia interfaces for forensic and security applications
  • Multimedia encryption
  • Mobile security and forensic analysis
  • Multimedia surveillance
  • Privacy in people identification and recognition
  • Privacy in people interaction analysis
  • Real-time processing of multimedia data for security
  • Reversible watermarking
  • Visual cryptography

The area of Security and Forensics has been included in the ACM MM program since last year, after several years of successful workshops. We encourage submissions that focus on the aforementioned topics applied to forensics and security contexts, as well as on other emerging topics. Both new techniques and methodologies specifically addressing security and forensics problems, and novel systems and tools, are welcome, as are common dataset evaluations and suggestions for performance analysis.


11. Social Media & Presence

This area seeks novel contributions investigating social interactions around multimedia systems, streams, and collections. Sharing of multimedia objects constitutes a prime aspect of many online social systems today. While Facebook, Twitter, and Instagram enable individuals to connect with their social audiences through images and videos, YouTube and Flickr support multimedia content creation and the conversations around it. The newly emergent Pinterest shows how one’s hobbies and interests can be manifested through collections of multimedia content shared in a social context.

Our focus is capturing the collective activity of humans centered on such multimedia documents and leveraging this to better understand a number of topics. For instance: the meaning and utility of the documents themselves; the real-world items, locations, and events that they represent; the conversations that they engender between people; and the models of interactions between people and these social media systems. The papers in this area should look specifically at systems wherein users are socially sharing or consuming multimedia and either advance our understanding of these systems or leverage the created data to address open research questions in the multimedia community at large.

Topics of interest include:

  • Multimedia-enabled social sharing of information
  • Behavioral modeling in social multimedia systems
  • Event detection and analysis in social multimedia/social media collections
  • Evaluation of participation and engagement of individuals around shared media
  • Location-based social multimedia
  • Interaction and communication pattern modeling and analysis around social media
  • Network analysis in social multimedia systems
  • Data-driven modeling and analysis of temporal, spatial characteristics of social multimedia streams

12. Systems and Middleware

This area targets applications, mechanisms, algorithms, and tools that enable the design and development of efficient, robust, and scalable multimedia systems. In general, it includes solutions at various levels in the software and hardware stack. In particular, the area covers topics like efficient implementations of, and processing frameworks for, multimedia workloads running on both traditional hardware and co-processors like graphics processing units (GPUs), network processors and field-programmable gate arrays (FPGAs). Submissions on cloud-supported multimedia systems and peer-to-peer streaming systems are encouraged.

We are also interested in submissions that explore the design of architectures and software for mobile multimedia and multimedia in pervasive computing applications. This includes tools and middleware to build multimedia applications, like content adaptation and transcoding, stream processing, and cloud multimedia systems.

Finally, this area covers multimedia systems providing mixed-reality user experience, evaluating user experiences, and conducting case studies of Quality of Experience (QoE) for multimedia systems.


Submission Guidelines

Submissions to ACM Multimedia 2013 must include new, unpublished, original research. Submissions containing substantially similar material may not be submitted to other venues concurrently with ACM Multimedia 2013.

If duplicate submissions are identified during the review process, authors will not be permitted to submit papers to the ACM Multimedia conference in the following years.

All submissions must be written in English. They must contain no information identifying the author(s) or their organization(s).

Further instructions will be posted shortly.

Important Dates

Abstract (only for full papers): March 1, 2013
Manuscript for full/short papers: March 8, 2013
Initial reviews to authors (only for Full papers): May 8, 2013
Rebuttal (only for full papers): May 8-17, 2013
Author-to-Author’s Advocate contact period: May 8-13, 2013
Notification of Acceptance: June 25, 2013
Camera-ready submission: August 1, 2013

Contact

  • Daniel Gatica-Perez (IDIAP & EPFL, Switzerland)
  • David A. Shamma (Yahoo Labs, USA)
  • Marcel Worring (University of Amsterdam, The Netherlands)
  • Roger Zimmermann (National University of Singapore, Singapore)
