The Future Internet will include a large number of internet-connected sensors (including cameras and microphone arrays), which provide opportunities for searching and analyzing large amounts of multimedia data from the physical world and for integrating them into value-added applications. Despite the emergence of techniques for searching physical-world multimedia (including the proliferation of participatory sensing applications), existing multimedia search solutions do not provide effective search over arbitrarily large and diverse sources of multimedia data derived from the physical world.

SMART will introduce a holistic, open source, web-scale multimedia search framework for multimedia data stemming from the physical world. To this end, SMART will develop a scalable search and retrieval architecture for multimedia data, along with intelligent techniques for real-time processing, search, and retrieval of physical-world multimedia. The SMART framework will boost scalability in both functional and business terms, while remaining extensible in terms of sensors and multimedia data processing algorithms. It will answer queries through the intelligent collection and combination of sensor-generated multimedia data, using the sensors and perceptual (A/V) signal processing algorithms that match the application context at hand. This matching will be based on the sensors' context and metadata (e.g., location, state, capabilities), as well as on the dynamic context of the physical world as the latter is perceived by processing algorithms (such as face detectors, person trackers, classifiers of acoustic events, and components for crowd analysis). At the same time, SMART will be able to leverage Web 2.0 social network information in order to facilitate social queries on physical-world multimedia. The main components of the SMART search framework will be implemented as open source software on top of the Terrier (terrier.org) open source engine.
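To make the matching idea concrete, the sketch below shows one possible way to select sensors by their context and metadata (location, state, capabilities), as the abstract describes. All names here (`Sensor`, `match_sensors`, the capability strings) are illustrative assumptions, not part of the SMART framework or the Terrier engine:

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    # Hypothetical metadata fields mirroring the abstract's examples:
    # location, state, and capabilities.
    sensor_id: str
    location: str
    state: str                      # e.g. "online" / "offline"
    capabilities: set = field(default_factory=set)

def match_sensors(sensors, location, required_capabilities):
    """Select online sensors at the query location that offer
    all capabilities the application context requires."""
    return [
        s for s in sensors
        if s.state == "online"
        and s.location == location
        and required_capabilities <= s.capabilities
    ]

sensors = [
    Sensor("cam-01", "plaza", "online", {"video", "face_detection"}),
    Sensor("mic-07", "plaza", "online", {"audio", "acoustic_events"}),
    Sensor("cam-02", "plaza", "offline", {"video", "crowd_analysis"}),
]

# A hypothetical "how crowded is the plaza?" query needing visual input:
hits = match_sensors(sensors, "plaza", {"video"})
print([s.sensor_id for s in hits])  # ['cam-01']
```

In a real deployment this selection would also weigh the dynamic physical-world context produced by the perceptual processing algorithms (face detectors, person trackers, acoustic event classifiers), not just static metadata.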