{"id":2007,"date":"2013-07-25T14:51:31","date_gmt":"2013-07-25T14:51:31","guid":{"rendered":"http:\/\/www.brosig-koch.de\/?page_id=2007"},"modified":"2013-07-25T14:51:31","modified_gmt":"2013-07-25T14:51:31","slug":"demo-kurzbeitrage","status":"publish","type":"page","link":"https:\/\/muc2013.mensch-und-computer.de\/en\/mensch-computer\/programm\/demo-kurzbeitrage\/","title":{"rendered":"Inter | aktion (Demos)"},"content":{"rendered":"<p>The demos will be presented during the demo and poster session on Monday, 9 September 2013, from 15:30 to 19:00. <\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>A Social Sculpture for the Digital Age<\/h2>\n<p><b>Author:<\/b> Dieter Meiller<br \/>\nHochschule Amberg-Weiden, Germany<\/p>\n<p>Nearly 1300 residents of a city came together to create a sculpture with both a physical and a virtual presence. The physical part consists of a large sphere, split into two hemispheres, each large enough to walk in between and view from the inside. Each participant in this collaborative project designed wax plates that were then cast in bronze and mounted onto the sphere. The virtual counterpart of the physical sculpture creates a duality that changes how the observer perceives the art: it provides deeper insight into the intention behind the artists\u2019 work and their relationship to each other in this social artwork.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>DiTAG: A Digital-Analog Board Game Interface<\/h2>\n<p><b>Authors:<\/b> Robin Krause, Marcel Haase, Benjamin Hatscher, Michael A. Herzog, Christine Goutri\u00e9<br \/>\nHS Magdeburg-Stendal, Germany<\/p>\n<p>The \u201cDigital To Analog Gaming Board\u201d (DiTAG) is an interface for playing and developing board games. 
Using RFID technology, it bridges the gap between the analog and the digital gaming world. A DiTAG game board consists of individual building blocks that are equipped with RFID readers and transponders and can be joined together via plug connections. The individual blocks of the board can recognize adjacent pieces and be recognized by them. This modular system is intended to let players equip game pieces and cards with RFID tags themselves and realize their own game ideas with the help of a simple editor. The first prototype serves to develop new game formats, along with the associated interaction patterns and design concepts, in the interplay of analog and digital game elements.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>Brain Painting: Action Paintings based on BCI-Input<\/h2>\n<p><b>Authors:<\/b> Markus Funk, Michael Raschke<br \/>\nUniversit\u00e4t Stuttgart, Germany<\/p>\n<p>We introduce RoboPix, a robot that paints Action Painting style pictures based on input from a Brain-Computer Interface (BCI). The BCI provides signals encompassing the user\u2019s recognized thoughts and level of excitement. These signals are mapped to the movement of the robot\u2019s arm, which spreads the paint on the canvas. Our system combines explicit and implicit signals to personalize and shape the created painting. Furthermore, we implemented a feedback loop that re-engages the user with the system after they lose focus. 
This system creates a modern art representation of the user\u2019s excitement and thoughts at the moment of creation.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>Reconstruction of the Erste Allgemeine Deutsche Kunstausstellung Dresden 1946<\/h2>\n<p>\n<b>Authors:<\/b> Konstantin Klamka, Thomas Schmalenberger<br \/>\nTechnische Universit\u00e4t Dresden, Germany<\/p>\n<p>Technological developments open up new possibilities for museums to present information to the public in a contemporary, experience-oriented form and thus to actively support the communication of knowledge. The art-historical need to convey comprehensive contexts motivates reflection on suitable forms of presentation. Taking the highly significant yet sparsely documented Erste Allgemeine Deutsche Kunstausstellung Dresden 1946 as its subject, this work describes a possible approach to the virtual, interactive reconstruction of real spaces. To this end, the entire exhibition situation was consolidated into a digital three-dimensional model in an extensive interdisciplinary collaboration; shown on multi-touch monitors as part of an art exhibition, the model offered visitors an immersive experience of the rooms and exhibits and invited interactive exploration.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>tANGibLE: a Smart Tangible Home Controller<\/h2>\n<p><b>Authors:<\/b> Mirko de Almeida Madeira Clemente, Martin Herrmann, Mandy Keck, Rainer Groh<br \/>\nTechnische Universit\u00e4t Dresden, Germany<\/p>\n<p>Gadgets used in everyday life address a wide range of functionalities. 
At the same time, we observe a trend toward simpler and more natural user interfaces. In this paper we describe the object-centered design process of tANGibLE, which resulted in a smart tangible home controller with easily accessible functions and a high degree of joy of use.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>Detecting and Interacting with Text in Free Space<\/h2>\n<p><b>Authors:<\/b> Frank Wippich, Christian Graf, Daniel Drewes<br \/>\nBlindsight Europe GmbH, Germany<\/p>\n<p>Computer vision technology has seen a significant surge in popularity in consumer applications, whether for face recognition in today\u2019s smartphone camera apps or for server-based object recognition that identifies products, text, or contexts in images within seconds. Thanks to powerful smartphones and tablets, computer vision applications can now perform the processing locally, giving users real-time feedback about the object they are pointing the camera at, without the need for a remote server.<br \/>\nHence, new ways of interacting with one\u2019s environment become possible, in particular for people with specific access needs, such as vision-impaired users. Blindsight\u2019s text detection algorithm showcases how text can be detected and spoken aloud in virtually real time using a smartphone.<br \/>\nIn this demo, users will be able to try out text detection and immerse themselves in a \u201cnon-visual\u201d user experience showing the state of the art in text detection and assistive technology for the visually impaired. Based on the users\u2019 experiences, this demo shall also discuss challenges and new ways to interact with text in free space. 
How can technology guide a user to the desired text? How can it filter the right information, or group information based on context, conventions, or user preferences?<br \/>\nProviding ideas and answers to these questions will not only enrich the advancement of assistive technology; it will also inspire new applications in a world where visual experiences are increasingly supplemented by tactile, audible, and vocal user interfaces.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>Caruso &#8211; Singing like a Tenor<\/h2>\n<p><b>Authors:<\/b> Jochen Feitsch, Marco Strobel, Christian Geiger<br \/>\nFH D\u00fcsseldorf, Germany<\/p>\n<p>In this contribution we describe a project whose goal is to give the user the feeling of singing like a tenor. We combine full-body 3D tracking with face tracking, morphing, singing-voice synthesis, and 3D character rendering in an interactive media installation.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>Designing Device-less Interaction \u2013 A Tracking Framework for Media Art and Design<\/h2>\n<p><b>Author:<\/b> Michaela Honauer<br \/>\nBauhaus Universit\u00e4t Weimar, Germany<\/p>\n<p>This paper presents KinectA, a tracking application that uses depth-sensing technologies such as the Kinect sensor. It provides the basic information needed for interaction without any input devices, and supports related tasks such as designing gesture-based interfaces. The software offers hand, skeleton, and object tracking simultaneously. With these basic tracking functions readily available, media artists and designers can focus on their creative work. 
KinectA is available for Mac and Windows and communicates with other software or hardware via OSC.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>Den Schrecken im Blick: Eye Tracking and Survival Horror Games<\/h2>\n<p>\n<b>Authors:<\/b> Martin Dechant, Markus Heckner, Christian Wolff<br \/>\nUniversit\u00e4t Regensburg, Germany<\/p>\n<p>This work presents a prototype of the survival horror game Sophia, which integrates gaze interaction via a stationary eye tracker into the Unity game engine. The gaze data are used to heighten tension in the game. We also examine how closing one\u2019s eyes can be integrated into a survival horror game as a means of interaction.<\/p>\n<hr style=\"border:none;border-top:2px dotted;height:1px;color:#6eb1fc;background:transparent;margin-top:20px;margin-bottom:20px\" \/>\n<h2>Sportal: A First-Person Videogame turned Exergame<\/h2>\n<p><b>Authors:<\/b> Benjamin Walther-Franks, Dirk Wenig, Jan Smeddinck, Rainer Malaka<br \/>\nUniversit\u00e4t Bremen, Germany<\/p>\n<p>Digital exercise games (exergames) can motivate players to perform physical exercises. However, most exergames are controlled by confined, predefined movements and do not promote long-term motivation, limiting player immersion. Well-funded commercial games often excel at long-term motivation but are not operated with motion input. We combine the best of both worlds by turning an existing videogame without motion control into an exergame: by adding a NUI control and feedback overlay to the popular first-person action game Portal 2 and designing custom game levels around exercise regimens, we turned it into the exergame Sportal. 
This approach can give gamers an incentive to exercise using high-quality first-person gameplay, and it can potentially acquaint exercise-eager non-gamers with a popular videogame title.<br \/>\n<\/p>","protected":false},"excerpt":{"rendered":"<p>The demos will be presented during the demo and poster session on Monday, 9 September 2013, from 15:30 to 19:00. A Social Sculpture for the Digital Age Author: Dieter Meiller Hochschule Amberg-Weiden, Germany; Nearly 1300 residents of a city came together &hellip; <a href=\"https:\/\/muc2013.mensch-und-computer.de\/en\/mensch-computer\/programm\/demo-kurzbeitrage\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":67,"menu_order":20,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-2007","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages\/2007","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/comments?post=2007"}],"version-history":[{"count":0,"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages\/2007\/revisions"}],"up":[{"embeddable":true,"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/pages\/67"}],"wp:attachment":[{"href":"https:\/\/muc2013.mensch-und-computer.de\/en\/wp-json\/wp\/v2\/media?parent=2007"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}