Planetary-scale applications increasingly use geo-replication to reduce user-perceived latency, highlighting the tension between performance and consistency in these global systems. We have proposed methods that keep systems fast when possible, resorting to stronger consistency levels only when necessary to meet user expectations. We have additionally shown how to enforce strong global guarantees efficiently by combining programming-language and systems techniques. This work has been presented at leading systems and programming languages conferences such as OSDI, POPL and EuroSys.
It is hard to ensure that complex software combining data, functionality, and multi-tenancy (e.g. web applications) is trustworthy, and efforts are being made to empower developers with automated verification techniques that catch security flaws at development time, when they are most often introduced. We are developing language-based techniques to tackle this issue, focusing on privacy, security, and live evolution in complex scenarios where security levels depend dynamically on the data being manipulated. Our results include novel static analyses for privacy and confidentiality in data-centric applications. We have also recently introduced results on type-based information-flow analysis for dynamic data dependencies, leading to a prototype publicly available at Microsoft Research Rise4Fun. These results have led to a US patent with OutSystems SA on model-driven security and to several PhD and MSc theses, with fundamental results published at top conferences such as POPL and ESOP.
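The core idea behind such analyses can be illustrated with a minimal sketch of lattice-based information-flow checking. This is illustrative only, not the actual type system: the two-point lattice, the `Labeled` wrapper and the `assign` check are simplifications introduced here for exposition.

```python
# Minimal sketch of lattice-based information-flow checking (illustrative
# only). Labels form the two-point lattice LOW <= HIGH, and an assignment
# is rejected when data labelled HIGH would flow into a LOW-labelled sink.

from dataclasses import dataclass

LOW, HIGH = 0, 1          # security lattice: LOW <= HIGH

def join(*labels):
    """Least upper bound: the label of a value combining several inputs."""
    return max(labels)

@dataclass
class Labeled:
    value: object
    label: int

def assign(sink_label, expr):
    """Permit the flow only if the expression's label is <= the sink's."""
    if expr.label > sink_label:
        raise ValueError("illegal flow: confidential data into public sink")
    return expr.value

salary = Labeled(50_000, HIGH)   # confidential field
year   = Labeled(2015, LOW)      # public field

public_report = assign(LOW, year)                       # LOW -> LOW: ok
combined = Labeled(salary.value + year.value,
                   join(salary.label, year.label))      # label is HIGH
audit_entry = assign(HIGH, combined)                    # HIGH -> HIGH: ok
# assign(LOW, salary) would raise: HIGH -> LOW is rejected
```

A static type system performs the same lattice check at compile time; the dynamic dependencies mentioned above arise when a value's label itself depends on data computed at run time.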
Global-scale Internet services running in cloud computing infrastructures require high data availability despite possible network partitions, which often forces data storage systems to relax consistency guarantees. Together with INRIA, we introduced a principled approach for designing systems based on weak consistency: conflict-free replicated data types (CRDTs). CRDTs have been integrated into a leading NoSQL database (Riak) and into multiple products running in production and serving millions of users. This work has received a Google research award, is currently supported by the EU project SyncFree, and developed technologies adopted by companies such as Basho, Bet365 and TomTom.
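The simplest CRDT, a state-based grow-only counter (G-Counter), shows why convergence holds without coordination. The sketch below is illustrative, not the Riak implementation: each replica increments only its own slot, and merge takes an element-wise maximum, which is commutative, associative and idempotent, so replicas converge regardless of message order or duplication.

```python
# Sketch of a state-based grow-only counter (G-Counter) CRDT.
# Merging is commutative, associative, and idempotent, so all replicas
# converge regardless of delivery order or message duplication.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.slots = [0] * n_replicas

    def increment(self, amount=1):
        self.slots[self.id] += amount   # each replica owns one slot

    def value(self):
        return sum(self.slots)

    def merge(self, other):
        # Element-wise max: safe to apply in any order, any number of times.
        self.slots = [max(a, b) for a, b in zip(self.slots, other.slots)]

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()   # replica 0 counts 2
b.increment()                  # replica 1 counts 1
a.merge(b); b.merge(a)         # anti-entropy exchange, either order
assert a.value() == b.value() == 3
```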
A key challenge posed to the Semantic Web stack, identified by the W3C, is how to combine ontology languages for describing concepts and roles, such as the Web Ontology Language (OWL), with rule languages, as in the Rule Interchange Format (RIF). We introduced a new semantics for knowledge bases combining rules and ontologies, so-called Hybrid Knowledge Bases, with much better complexity results than the prior state of the art, and devised query-driven procedures for this semantics. We also developed the associated tool NoHR, currently the only available reasoner combining OWL ontologies with rules, which has been integrated into the widely used Protégé platform (Stanford).
Our work on Requirements Engineering spans from requirements identification techniques to the evaluation of their specifications. We focus on novel software modularity techniques and the volatile nature of requirements to support evolution, on the role of organisational objectives in expressing intentional goals that impact stakeholders’ expectations, and on the quality of requirements and software models in terms of correctness, complexity, completeness and understandability. Results from this research have been published in top conferences (e.g. RE, CAiSE) and journals (e.g. REJ, SQJ, Software: P&E). Our work on Quality Models won a Best Paper Award at CAiSE in 2014. We have also played a key role in the development of the AOSD and Modularity community since 2002. Our highly cited seminal work on aspect-oriented composition of software properties won the Most Influential Paper Award at the Modularity conference in 2013.
Concurrency and communication are pervasive in modern software, but development tools still fail to support them, leading to failures in protocol and service compliance and to timing errors in resource usage, and calling for novel programming abstractions and analysis methods. We introduced “conversation types”, which allow multiparty interactions to be type-checked for deadlock-freedom and protocol compliance, leading to high-impact publications. We have also developed type-based techniques to certify interface contracts and to analyse the behaviour of distributed transactions. Our work on deadlock-free session protocols is considered seminal in linear and session types, and is the most cited CONCUR paper since 2010. We also developed new techniques, based on behavioural types, to analyse concurrent programs manipulating shared state. This research has been developed in collaboration with CMU, INRIA, Imperial College, and Glasgow, and has led to publicly available prototypes and highly visible publications in top venues such as POPL, ESOP, ECOOP, Inf&Comp, and ACM Computing Surveys.
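The flavour of such protocol checks can be conveyed with a sketch of the simpler binary case (a deliberate simplification of the multiparty conversation types above; the protocol encoding and function names are illustrative). Two endpoints are communication-safe when their session protocols are dual: every send on one side is matched by a receive of the same payload type on the other.

```python
# Illustrative duality check for binary session protocols: a protocol is
# a sequence of (direction, payload_type) actions, with '!' = send and
# '?' = receive. Dual endpoints cannot mismatch messages or deadlock.

DUAL = {"!": "?", "?": "!"}

def dual(protocol):
    """Dual protocol: flip every send into a receive and vice versa."""
    return [(DUAL[d], t) for d, t in protocol]

def compatible(p, q):
    """Endpoints are safe iff one protocol is the dual of the other."""
    return dual(p) == q

client = [("!", "Request"), ("?", "Response")]
server = [("?", "Request"), ("!", "Response")]
assert compatible(client, server)        # well-typed session
assert not compatible(client, client)    # two senders would deadlock
```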
We have developed innovative interactive multimedia solutions, in collaboration with domain users, in areas such as cultural heritage, contemporary dance, medieval illuminations, modern art, and new media art, in partnership with Cultursinta, the Berardo Museum, Joana Vasconcelos, the Gulbenkian Foundation, CML, CADA and Rui Horta/Espaço do Tempo. A project in this activity, TKB, contributed to an ERC Starting Grant (Carla Fernandes, FCSH UNL), and further research is being carried out within the H2020 Cognitus project. Results were published in key conferences and journals (ACM JOCCH, ACM CHI, ACM Multimedia).
Within the ISTRION project, we have been developing phrase-based statistical machine translation with human supervision. Because our systems incorporate human validation, they rapidly improve over time and achieve better translation quality than MOSES (a state-of-the-art SMT system) when trained on the same parallel text collection: our translation quality is, on average, 14 BLEU points higher than MOSES across 16 translation directions among Portuguese, English, French, Spanish and German. These results were spun off into a recently founded start-up, ISTRION BOX.
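For readers unfamiliar with the metric, BLEU compares a candidate translation against reference translations using modified n-gram precision and a brevity penalty. The sketch below is a simplified single-reference version without smoothing, for illustration only; it is not the exact evaluation setup used in the project.

```python
# Simplified sentence-level BLEU (one reference, n-grams up to 4, no
# smoothing): geometric mean of modified n-gram precisions, scaled by a
# brevity penalty that punishes candidates shorter than the reference.

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    c, r = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(c, n), ngrams(r, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = max(1, sum(cand.values()))
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    # Brevity penalty: 1 if the candidate is at least as long as the ref.
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(1, len(c)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

assert bleu("the cat sat on the mat", "the cat sat on the mat") == 1.0
assert bleu("the cat", "the cat sat on the mat") < 1.0
```

A 14-point gap on this 0–100 scale (scores are conventionally reported multiplied by 100) is a very large difference between systems.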
The number of processing units per computer has been increasing in recent years, with one or more multi-core processors and GPUs per machine. This allows programs to exploit multiple threads of activity, but also challenges reliability. We have been studying abstractions that simplify concurrent programming, such as transactional memory (TM), data-centric concurrency control, and skeleton-based programming; designing techniques to optimise concurrency-aware runtimes and to broaden the use of TM in various contexts; and proposing novel static and dynamic analysis techniques to enforce the correctness of concurrent programs. Our work includes collaborations with the IBM Haifa Research Lab, the University of Brno, and the EU COST actions Euro-TM and ARVI. Recent results were published in key conferences (e.g. ECOOP, Euro-Par, PDP) and journals (e.g. TACO, CCPE). Two of our publications received best conference paper awards.
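The appeal of TM is that programmers write atomic blocks instead of managing locks. The toy sketch below illustrates the optimistic read-validate-commit cycle common to many TM designs; it is a teaching sketch, with all names (`TVar`, `atomically`) invented here, and real TM runtimes are far more sophisticated.

```python
# Toy optimistic software transactional memory (illustrative only): a
# transaction buffers its writes, records the versions it read, and at
# commit re-validates those versions under a lock, retrying on conflict.

import threading

class TVar:
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()

def atomically(tx):
    while True:                        # retry loop on conflict
        reads, writes = {}, {}

        def read(tvar):
            if tvar in writes:         # read-your-own-writes
                return writes[tvar]
            reads.setdefault(tvar, tvar.version)
            return tvar.value

        def write(tvar, value):
            writes[tvar] = value       # buffered until commit

        result = tx(read, write)
        with _commit_lock:
            if all(tv.version == v for tv, v in reads.items()):
                for tv, value in writes.items():
                    tv.value, tv.version = value, tv.version + 1
                return result
        # Validation failed: a TVar changed since we read it; retry.

account_a, account_b = TVar(100), TVar(0)

def transfer(read, write, amount=30):
    write(account_a, read(account_a) - amount)
    write(account_b, read(account_b) + amount)

atomically(transfer)
assert (account_a.value, account_b.value) == (70, 30)
```

The transfer is atomic: no other transaction can observe the state where the money has left one account but not arrived in the other.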
We developed effective and competitive methods and algorithms to analyse and retrieve information from large data sources in diverse scenarios. Within the ImTV project we researched new ways of making media experiences more personal and social, e.g. making the TV aware of users’ mood. This research was awarded a Fraunhofer Challenge Prize. We created a framework for multimedia information processing based on rank-fusion and cross-stream analysis techniques. Our results achieved excellent scores in TREC 2013 and CLEF 2013: we ranked first in the CLEF Multimodal Medical Search task, and among the top 10 of 60 submissions in the TREC Real-time Search (Microblog) task. We are currently launching collaborations with Carnegie Mellon and the BBC on multimedia search and interactive content support systems.
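Rank fusion combines the result lists of several retrieval systems (e.g. a text index and an image index) into one ranking. As a concrete illustration, the sketch below implements reciprocal rank fusion (RRF), a standard technique of this family; it is given only as an example of the idea, and the project's actual fusion methods may differ.

```python
# Reciprocal rank fusion (RRF): each ranking contributes 1 / (k + rank)
# to a document's score, so documents ranked highly by several systems
# rise to the top. The constant k dampens the effect of top positions.

def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

text_run  = ["d1", "d2", "d3"]     # ranking from a text retrieval run
image_run = ["d3", "d1", "d4"]     # ranking from an image retrieval run
fused = rrf([text_run, image_run])
assert fused[0] == "d1"            # d1 is near the top of both runs
```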
Developing methods to update knowledge bases storing objects such as business rules, regulations or contracts is considered a key challenge, in particular in the presence of conflicting and dynamic updates, which we have addressed using principled approaches to rule-based KB updates. After pinpointing inescapable limitations of other approaches, we introduced a novel, more suitable foundation for rule updates and solved the long-standing problem of state condensing for rule update semantics, leading to the first practical update operators for KBs combining ontologies and rules, with results published in IJCAI and KR, leading AI conferences.
Designing appropriate simulation and analysis techniques for scientific and engineering disciplines often requires the development of novel computational and visualization tools and methods. Under an ESA research contract on the validation of the Sentinel-3 satellite (monitoring the marine environment in coastal zones), we developed parallel Monte-Carlo simulations of radiation transfer. Together with CENIMAT researchers (Materials Science), we developed TOMOgpu, a GPU-based tool and environment for characterizing reinforcement populations in composite materials, already adopted in unforeseen scenarios such as paleontology (referenced in Nature). Some disciplines give rise to complex, combinatorial problem classes, in particular when discrete domains are involved. We have investigated new probabilistic constraint solving algorithms, relevant for informing decision making in inverse engineering and reliability problems. A main achievement is the development of CaSPER, a constraint solving tool suite that includes advanced and efficient propagation techniques.
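To make the Monte-Carlo approach concrete, the sketch below estimates photon transmittance through a purely absorbing slab by sampling exponential free paths; it is a minimal pedagogical example in the spirit of radiation-transfer simulation, not the ESA code, and all names and parameters are invented here. The estimate converges to the Beer-Lambert prediction exp(-mu * thickness), and each photon is independent, which is what makes such simulations embarrassingly parallel.

```python
# Minimal Monte-Carlo radiation-transfer sketch (illustrative only):
# photons traverse a purely absorbing slab; each photon's free path is
# drawn from an exponential distribution with rate mu (the absorption
# coefficient), and the fraction of photons whose path exceeds the slab
# thickness estimates the transmittance exp(-mu * thickness).

import math
import random

def transmittance(mu, thickness, n_photons, seed=42):
    rng = random.Random(seed)
    crossed = 0
    for _ in range(n_photons):
        path = rng.expovariate(mu)   # free path before absorption
        if path > thickness:
            crossed += 1
    return crossed / n_photons

mu, thickness = 2.0, 0.5             # absorption coefficient, slab depth
estimate = transmittance(mu, thickness, 100_000)
exact = math.exp(-mu * thickness)    # Beer-Lambert law: exp(-1) ~ 0.368
assert abs(estimate - exact) < 0.01  # within Monte-Carlo noise
```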
Software developers strive to reduce accidental complexity in design and construction while coping with change and fast deployment. We introduced the first model-driven development (MDD) approach for Early Aspects, new model transformation tools, and consistency checking and validation techniques. Our work on Requirements Engineering with MDD to evaluate model quality resulted in best paper awards (CAiSE) and prototype tools. Additionally, we developed new principles for MDD, including model transformation tools and languages, and have shown how MDD can help support the portability of applications to the Cloud, with results published in major forums, including IEEE TSE. More recently, we have been exploring experimental software engineering techniques to evaluate Software Product Line approaches and to build catalogues of techniques, operationalization approaches, and algorithms for reuse, with results published in major events and journals, including one of the ten most cited papers on traceability published in SoSyM.
Existing techniques for handling geo-referenced information do not conveniently explore interaction with multiple information sources in context to better support collaborative awareness. We have thus developed generic tools for creating collaborative, geo-aware, context-dependent platforms. Examples include Hidralerta, a collaborative platform for the integrative evaluation of short-term forecasting data in coastal and port areas; LiveTeams, enabling real-time evaluation of emergency situations and rescue in the field; Lx Conventos, evaluating the impact of the extinction of religious orders on the urban landscape of 19th-century Lisbon; and the Portuguese Atlas of Literary Landscapes, supporting query, visualization and interaction with literary geographies in several dimensions and according to the user’s geographic context. These projects have been developed in collaboration with several relevant external partners, including LNEC, CML, the National Archives/Torre do Tombo, ANPC, and FCSH.
The social web continues to grow explosively, but existing online interaction tools (e.g. blogs, forums, news and media sites) tend to discourage debate and may even facilitate low-quality argument and thinking. We are researching how to semantically represent, structure, evaluate and relate multiple argumentative viewpoints, integrated with linked data. We introduced a general model for argumentation and voting in the social web, providing an infrastructure for more open, flexible debates. Based on evolutionary game theory, we showed how actors/users can benefit from recognizing the intentions and commitments of other actors/users, leading to a stable increase in cooperation compared to the best extant strategies, with results published in Nature Scientific Reports, IJCAI and AAMAS.
An increasing number of applications run and store their data in public clouds or outsourced data centres. This poses new challenges regarding the availability, security and privacy of data contents, as well as data processing. We are devising models, methods, techniques and tools for privacy-preserving data storage and computation, with privacy-enhanced data-access and information-retrieval methods that allow indexing and searching to run directly over always-encrypted data stores without compromising the expected privacy guarantees. This work has been supported by an Amazon grant and has been presented at SRDS, a top venue in reliable and secure distributed systems.
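The basic mechanism behind searching over encrypted data can be illustrated with a keyword-token scheme in the style of searchable symmetric encryption. This sketch is purely illustrative, not the group's actual scheme: real constructions address leakage, updates and access patterns that this toy ignores. The server stores only keyed hashes of keywords, and the client issues a search by sending the token for a keyword, so the server matches tokens without learning the plaintext.

```python
# Sketch of keyword search over an encrypted index (illustrative only).
# Keywords are replaced by HMAC tokens under a client-held key; the
# server can match tokens it receives but cannot invert them to recover
# the underlying keywords.

import hashlib
import hmac

KEY = b"client-secret-key"   # held only by the client

def token(keyword):
    """Deterministic search token for a keyword (client side)."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

# Client-side indexing: document ids keyed by keyword tokens.
encrypted_index = {}
for doc_id, words in {1: ["cloud", "privacy"], 2: ["cloud"]}.items():
    for w in words:
        encrypted_index.setdefault(token(w), []).append(doc_id)

# Server-side search: match the opaque token against the index.
def search(trapdoor):
    return encrypted_index.get(trapdoor, [])

assert search(token("cloud")) == [1, 2]
assert search(token("privacy")) == [1]
assert search(token("unknown")) == []
```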
Computational systems are pervasive in health care and well-being activities, and NOVA LINCS carries out research in this area across multiple integrated topics. Advanced personal interfaces for health information management are a key enabler of patient participation and involvement. Data management that addresses privacy and security issues, as well as system reliability in process workflows, is another aspect where NOVA LINCS is active. Processing large amounts of multimodal data, including images and sensor signals, enables new computer-aided therapy and situated learning using prior knowledge. Other aspects include gaming and well-being applications that promote healthy lifestyles and aid in rehabilitation processes. Current and past CMU-Portugal and UT Austin-Portugal funded projects support these ongoing research activities.