WEBIST 2018 Abstracts


Area 1 - Internet Technology

Full Papers
Paper Nr: 29
Title:

Recommendation Systems in a Conversational Web

Authors:

Konstantinos N. Vavliakis, Maria Th. Kotouza, Andreas L. Symeonidis and Pericles A. Mitkas

Abstract: In this paper we redefine the concept of the Conversational Web in the context of hyper-personalization. We argue that hyper-personalization in the WWW is only possible within a conversational web where websites and users continuously “discuss” (interact in any way). We present a modular system architecture for the conversational WWW, given that adapting to various user profiles and to websites that vary in size and user traffic is necessary, especially in e-commerce. Obviously there cannot be a unique fit-to-all algorithm; instead, numerous complementary personalization algorithms and techniques are needed. In this context, we propose PRCW, a novel hybrid approach combining offline and online recommendations using RFMG, an extension of RFM modeling. We evaluate our approach against the results of a deep neural network on two datasets coming from different online retailers. Our evaluation indicates that a) the proposed approach outperforms current state-of-the-art methods on small-medium datasets and can improve performance on large datasets when combined with other methods, b) results can vary greatly across datasets, depending on their size and characteristics, so identifying the proper method for each dataset can be a rather complex task, and c) offline algorithms should be combined with online methods in order to obtain optimal results, since offline algorithms tend to offer better performance but online algorithms are necessary for capturing new users and emerging trends.
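RFMG is specific to the paper, but the underlying RFM model it extends scores each customer by Recency (days since the last purchase), Frequency (number of purchases) and Monetary value (total spend). A minimal sketch of plain RFM scoring over a hypothetical order log (the customer ids, dates and amounts are invented):

```python
from datetime import date

# Hypothetical purchase log: (customer_id, purchase_date, amount).
orders = [
    ("alice", date(2018, 5, 1), 120.0),
    ("alice", date(2018, 6, 20), 80.0),
    ("bob",   date(2018, 1, 15), 300.0),
]
today = date(2018, 7, 1)

def rfm(orders, today):
    """Return {customer: (recency_days, frequency, monetary)}."""
    scores = {}
    for cid, when, amount in orders:
        recency, freq, monetary = scores.get(cid, (None, 0, 0.0))
        days = (today - when).days
        # Recency is the smallest "days since purchase" seen so far.
        recency = days if recency is None else min(recency, days)
        scores[cid] = (recency, freq + 1, monetary + amount)
    return scores
```

In practice each of the three raw values is then binned into a score (e.g. quintiles) before being used for segmentation or as a recommendation feature.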

Paper Nr: 58
Title:

IUPTIS: A Practical, Cache-resistant Fingerprinting Technique for Dynamic Webpages

Authors:

Mariano Di Martino, Pieter Robyns, Peter Quax and Wim Lamotte

Abstract: Webpage fingerprinting allows an adversary to infer the webpages visited by an end user over an encrypted channel by means of network traffic analysis. If such techniques are applied to websites that contain user profiles (e.g. booking platforms), they can be used for personal identification and pose a clear privacy threat. In this paper, a novel HTTPS webpage fingerprinting method - IUPTIS - is presented, which accomplishes precisely this, through identification and analysis of unique image sequences. It improves upon previous work by being able to fingerprint webpages containing dynamic rather than just static content, making it applicable to e.g. social network pages as well. At the same time, it is not hindered by the presence of caching and does not require knowledge of the specific browser being used. Several accuracy-increasing parameters are integrated that can be tuned according to the specifics of the adversary model and targeted online platform. To quantify the real-world applicability of the IUPTIS method, experiments have been conducted on two popular online platforms. Favorable results were achieved, with F1 scores between 82% and 98%, depending on the parameters used. This makes the method practically viable as a means for personal identification.

Short Papers
Paper Nr: 5
Title:

Web User Interface Implementation Technologies: An Underview

Authors:

Antero Taivalsaari, Tommi Mikkonen, Kari Systä and Cesare Pautasso

Abstract: Over the years, the World Wide Web has evolved from a document distribution environment into a rich development platform that can run compelling, full-fledged software applications. However, the programming capabilities of the web browser – designed originally for relatively simple scripting tasks – have evolved organically in a rather haphazard fashion. Consequently, there are many ways to build applications on the Web today. Depending on one’s viewpoint, current standards-compatible web browsers support three, four or even five built-in application rendering and programming models. In this paper, we provide an “underview” of the built-in client-side web application UI implementation technologies, i.e., a summary of those rendering models that are built into the standards-compatible web browser out-of-the-box. While the dominance of the base HTML/CSS/JS technologies cannot be ignored, we foresee Web Components and WebGL gaining popularity as the world moves towards more complex and even richer web applications, including systems supporting virtual and augmented reality.

Paper Nr: 17
Title:

Constructing a Word Similarity Graph from Vector based Word Representation for Named Entity Recognition

Authors:

Miguel Feria, Juan Paolo Balbin and Francis Michael Bautista

Abstract: In this paper, we discuss a method for identifying a seed word that would best represent a class of named entities in a graphical representation of words and their similarities. Word networks, or word graphs, are representations of vectorized text where nodes are the words encountered in a corpus, and the weighted edges incident on the nodes represent how similar the words are to each other. Word networks are divided into communities using the Louvain method for community detection, after which the betweenness centrality of each node within its community is computed. The most central node in each community represents the best candidate for a seed word of the named entity group that the community represents. Our results on a bilingual data set show that words with similar lexical content, from either language, belong to the same community.
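The pipeline described above (build a weighted word-similarity graph, partition it with the Louvain method, then take the most central node of each community as its seed word) can be sketched with NetworkX; version 2.8 or later is assumed for `louvain_communities`, and the toy graph and similarity weights are invented for illustration:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy word-similarity graph: nodes are words, edge weights are cosine
# similarities between their (hypothetical) word vectors.
G = nx.Graph()
G.add_weighted_edges_from([
    ("city", "town", 0.9), ("city", "village", 0.7), ("town", "village", 0.8),
    ("river", "lake", 0.9), ("river", "sea", 0.7), ("lake", "sea", 0.8),
    ("city", "river", 0.1),  # weak link between the two word groups
])

# Louvain treats the weight as a connection strength, while betweenness
# centrality treats it as a distance, so invert it for the latter.
for u, v, d in G.edges(data=True):
    d["distance"] = 1.0 / d["weight"]

communities = louvain_communities(G, weight="weight", seed=42)

# Seed word of a community = the node with the highest betweenness
# centrality, computed on the community's subgraph.
seeds = []
for community in communities:
    centrality = nx.betweenness_centrality(G.subgraph(community),
                                           weight="distance")
    seeds.append(max(centrality, key=centrality.get))
```

On tiny communities (e.g. triangles) all betweenness values tie at zero and the winner is arbitrary; with realistically sized communities the most central word is meaningful.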

Paper Nr: 20
Title:

Automatic Discovery and Selection of Services in Multi-PaaS Environments

Authors:

Rami Sellami and Stéphane Mouton

Abstract: Over the past couple of years, a new paradigm has emerged which is referred to as DevOps. It is a methodology to efficiently manage the relationship between development and operations in order to automate the application lifecycle. Spurred by its popularity, it is used today to manage applications at the PaaS level of the Cloud. However, it becomes very challenging when it comes to deploying an application in multi-PaaS environments. The first challenge is to discover and select services taking into account both the application requirements and the PaaS capabilities. Indeed, PaaS providers do not use the same mechanisms to describe and expose their services. In addition, there is no standard way to describe application requirements. To tackle these issues, we propose an automatic and declarative approach to discover and select services offered by PaaS providers. It enables developers to express their requirements and PaaS providers to expose their offers in manifests. A matching algorithm then selects the most appropriate offer in terms of PaaS capabilities to deploy the application. An offer may involve either a single PaaS provider or multiple providers. The key ingredients of our solution are threefold: (1) manifests to describe application requirements and the offers, (2) an ontology to remove semantic ambiguities in PaaS providers' capabilities, and (3) a matching algorithm to select the most appropriate offer for the application. The solution is provided as a REST API and is delivered with a Web client.

Paper Nr: 21
Title:

Mining Developer Questions about Major Web Frameworks

Authors:

Zakaria Mehrab, Raquib Bin Yousuf, Ibrahim Asadullah Tahmid and Rifat Shahriyar

Abstract: Web frameworks are the de facto way to build web-enabled applications. Stack Overflow, being one of the leading question answering sites available, has become a helpful resource in numerous software engineering research studies. In this paper, we present a study of common challenges and issues among developers of two major web frameworks, namely Laravel and Django, by mining questions asked on Stack Overflow. We extracted the issues that developers are most concerned about, sorted them by popularity and difficulty metrics, and observed the contrasting nature of difficulty and popularity. We also noted an exception: installation is a popular issue across both frameworks and at the same time difficult to resolve. Besides, we found that about 50% of the issues are common to both frameworks. Our findings should help framework developers better understand the needs of framework users by focusing on the most difficult and the most popular issues.

Paper Nr: 31
Title:

Predictive Analytical Framework based on Formal Method to Enhance Mobile and Pervasive Learning Experience

Authors:

Manel BenSassi, Mona Laroussi and Henda BenGhezala

Abstract: In this paper, we present a predictive analytical framework for mobile and ubiquitous learning environments based on three main dimensions: learner, contextualized activity and space. The main objective of this proposal is to assist pedagogical designers in developing engaging and effective courses by focusing on the learner’s experience and learning environment. To do that, a solid structure is essential to organize correlations between different activities and to assess their reliability with respect to the learner’s context. The strengths of our proposal lie in the fact that, in a formal manner and through friendly graphical interfaces, it allows pedagogical designers: (1) to specify, model, simulate, analyse and verify different types of context-aware and adaptive learning activities and their related contexts, (2) to assess the reliability of indoor and outdoor learning spaces within a pervasive environment through factual cases and to experiment with various learning scenarios, and (3) to simulate and verify interactions and co-adaptability rules between learner, contextualized activity and space.

Paper Nr: 43
Title:

Issues and Challenges of Access Control in the Cloud

Authors:

Francesca Lonetti and Eda Marchetti

Abstract: Cloud computing offers scalable and efficient information sharing and storage resources. However, the risk of security breaches for personal and private data in this computing environment is very high. Access control is among the most widely adopted means to assure that sensitive information or resources are correctly accessed. This paper provides an overview of the main access control models in cloud infrastructures and discusses the most important challenges. In particular, focusing on access control policy specification, analysis, verification and enforcement, we also identify some emerging issues and point out some solutions and future research directions for cloud computing.

Paper Nr: 48
Title:

:DRHOP, A Platform Proposal for Online Charity

Authors:

Francesco Dagnino and Marina Ribaudo

Abstract: This short paper describes :DRHOP, a project proposal based on the idea of an “open interface” that aims to aggregate different types of “services” and to build around them a community of users who, without changing their usual online (and offline) habits, collect a wallet of drops, a sort of virtual currency that can be donated for charity purposes. The architecture of the system is introduced together with the current version of a proof of concept, implemented in the context of online advertising. :DRHOP is still in its initial design phase, and important issues like trust, security, and privacy – which are fundamental for the success of a proposal like this – are only partially sketched. Nevertheless, we think the idea is worth pursuing, as witnessed by some similar projects, also briefly introduced in the paper, which show the interest of the Internet community in online charity experiences.

Paper Nr: 50
Title:

A Model Oriented Approach for Managing Traceability of Biological Samples and Tests of Patients in Assisted Reproduction Clinics

Authors:

L. Morales-Trujillo, V. Cid de la Paz Furest, J. G. Enríquez and José Navarro

Abstract: Assisted reproduction has become a service that more and more people use. Current trends, such as delayed motherhood and single-parent families, have multiplied the options and treatments put at the service of society. A fundamental part of these processes lies in the work of the laboratories and in the samples that are handled in clinical processes before being implanted in future mothers. Sample management is a critical aspect that requires appropriate mechanisms to guarantee the traceability of the samples and to avoid fatal errors. Their correct identification, monitoring and control is fundamental and of special relevance. However, the systems currently offered to clinics present important problems: they offer little security, are very expensive, are tied to a specific provider so that the traceability system cannot be connected to the hospital's central management system, or are very intrusive in the daily work of the laboratory technicians. In this paper a software solution based on the definition of automatic models and protocols is proposed. It includes the appropriate devices to manage the traceability of the samples in parallel with the work of the laboratory technician.

Paper Nr: 51
Title:

Towards a Declarative Approach to Stateful and Stateless Usage Control for Data Protection

Authors:

Francesco Di Cerbo, Fabio Martinelli, Ilaria Matteucci and Paolo Mori

Abstract: Virtually any online website or service has a rising need for data protection mechanisms, especially for personal data, considering initiatives such as the new General Data Protection Regulation applying to the EU economic area, or the Cybersecurity Law for the Chinese market. It therefore seems necessary to have mechanisms that help users, as well as legal experts and practitioners, to automatically manage the processing of personal and sensitive data in a secure and compliant manner, reducing the probability of human error. To this aim, we present our initial proposal for an automatically enforceable policy language, UPOL, for access and usage control of personal information, aiming at transparent and accountable data usage. UPOL extends and combines previous research results, U-XACML and PPL, and is part of a more general proposal to regulate multi-party data sharing operations. A use case is proposed, considering challenges brought by the EU’s new GDPR.

Paper Nr: 54
Title:

Multi-Tenancy: A Concept Whose Time Has Come and (Almost) Gone

Authors:

Christoph Bussler

Abstract: With the emergence of server-less computing, the need for multi-tenancy in application services diminishes and eventually disappears, as server-less computing supports the isolation between tenants by cloud account automatically. A server-less application installed into a customer’s cloud account is automatically isolated from other customers’ cloud accounts by the underlying cloud provider infrastructure. Besides providing this partitioning in all aspects, server-less computing simplifies the implementation of an application service, since multi-tenancy does not have to be implemented or managed at all by the application service logic itself. The position brought forward in this paper is that the concept of multi-tenancy for application design and implementation is obsolete in the context of application services implemented on server-less computing.

Paper Nr: 61
Title:

A Hierarchical Evaluation Scheme for Pilot-based Research Projects

Authors:

Thomas Zefferer

Abstract: Evaluation is an integral part of most research projects in the information-technology domain. This especially applies to pilot-based projects that develop solutions for the public sector. There, responsible stakeholders require profound evaluation results of executed projects to steer future research activities in the right directions. In practice, most projects apply their own project-specific evaluation schemes. This yields evaluation results that are difficult to compare between projects. Consequently, lessons learned from conducted evaluation processes cannot be aggregated to a coherent holistic picture and the overall gain of executed research projects remains limited. To address this issue, we propose a common evaluation scheme for arbitrary pilot-based research projects targeting the public sector. By relying on a hierarchical approach, the proposed evaluation scheme enables in-depth evaluations of research projects and their pilots, and assures at the same time that evaluation results remain comparable. Application of the proposed evaluation scheme in the scope of an international research project confirms its practical applicability and demonstrates its advantages for all stakeholders involved in the project.

Posters
Paper Nr: 14
Title:

A Decentralized and Remote Controlled Webinar Approach, Utilizing Client-side Capabilities: To Increase Participant Limits and Reduce Operating Costs

Authors:

Roy Meissner, Kurt Junghanns and Michael Martin

Abstract: We present a concept and implementation for increasing the efficiency of webinar software through a remote-control approach based on WebRTC. This technology enables strong security and privacy, is usable across devices, builds on open-source technology, and brings a new level of interactivity to webinars. We used SlideWiki, WebRTC, and in-browser speech-to-text engines to provide innovative accessibility features such as multilingual presentations and live subtitles. Our solution was assessed for real-world usage, tested within the SlideWiki project, and its technological limits were determined. Such measurements were not previously available and show that our approach outperforms open-source competitors in efficiency and cost.

Paper Nr: 27
Title:

Cloud Strategies for Software Providers: Strategic Choices for SMEs in the Context of the Cloud Platform Landscape

Authors:

Damian Kutzias and Holger Kett

Abstract: Within this paper, the fundamental hosting challenge for software providers entering the cloud business and becoming Software-as-a-Service (SaaS) providers is discussed. Selecting a hosting provider and consuming Infrastructure-as-a-Service (IaaS) is a common and viable solution. Some cooperation-based strategic choices are presented as alternative solutions and compared to the more common approaches; applied meaningfully, these can hold great potential, especially for Small and Medium-sized Enterprises (SMEs). To this end, the relevant terms Platform-as-a-Service (PaaS) and Cloud Ecosystem are discussed, differentiated and defined with regard to existing definitions and their ambiguities. Subsequently, the outset and challenges are described, focusing on the case of SME providers. Finally, the different strategic choices are presented with their advantages, disadvantages and challenges, summarized in an overview matrix roughly weighted by relative responsibilities as well as ecological and strategic aspects.

Paper Nr: 56
Title:

Sentiment Analysis Approaches based on Granularity Levels

Authors:

Benaissa Azzeddine Rachid, Harbaoui Azza and Ben Ghezala Henda

Abstract: The evolution of Web 2.0 has enabled the emergence of social media, where users can post, share and discuss their opinions about products, events, people and organizations. This growth of user-generated content (UGC) has led to the publication of numerous works during the last decade in the scientific community working on sentiment analysis. Sentiment analysis, also known as opinion mining, is the field of extraction and analysis of opinions, feelings and attitudes of users on the web. In this paper, we provide an overview of the field of sentiment analysis by discussing the workflow of mining opinions at different granularity levels and covering common and recent approaches and techniques used to solve tasks related to the sentiment analysis process at every level.
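As a toy illustration of what "granularity level" means here (not one of the approaches surveyed in the paper), a simple lexicon-based scorer can work at the sentence level and then aggregate to the document level; the lexicon and review text are invented:

```python
# Invented polarity lexicon: word -> sentiment polarity.
LEXICON = {"great": 1, "good": 1, "poor": -1, "terrible": -1}

def sentence_score(sentence):
    """Sentence-level sentiment: sum of the word polarities."""
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0)
               for word in sentence.split())

def document_score(text):
    """Document-level sentiment: aggregate over the sentences."""
    return sum(sentence_score(s) for s in text.split(".") if s.strip())
```

On "The camera is great. The battery is terrible." the two sentence scores (+1 and -1) cancel out at the document level, which is precisely why finer granularities such as sentence and aspect level are studied.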

Paper Nr: 62
Title:

Comparison between Range-based and Prefix Dewey Encoding

Authors:

Ebtesam Taktek, Dhavalkumar Thakker and Daniel Neagu

Abstract: XML is an increasingly important area in the field of data representation and communication over the web. XML data labelling plays an important role in the management of XML data, since it allows locating XML content uniquely and thereby improves query performance. This paper focuses on two schemes for labelling native XML databases, where the data is represented as ordered XML trees that contain relationships between nodes. We present a comparison between range-based and prefix encoding with a focus on labelling time and memory size. In our proposed approach, we employ UTF-8, UTF-16 and UTF-32 encoding and decoding for both labelling schemes.
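To make the two labelling families concrete, the following sketch (over a hypothetical five-node document tree; this is not the paper's encoding) assigns prefix (Dewey) labels and range-based (start, end) labels and shows the corresponding ancestor tests:

```python
# Hypothetical XML document tree: node -> children in document order.
tree = {
    "book": ["title", "authors"],
    "title": [],
    "authors": ["author1", "author2"],
    "author1": [],
    "author2": [],
}

def dewey_labels(node, label="1", out=None):
    """Prefix (Dewey) labelling: a child extends its parent's label."""
    out = {} if out is None else out
    out[node] = label
    for i, child in enumerate(tree[node], start=1):
        dewey_labels(child, f"{label}.{i}", out)
    return out

def range_labels(node, counter=None, out=None):
    """Range-based labelling: a node's (start, end) interval encloses
    the intervals of all of its descendants."""
    counter = [0] if counter is None else counter
    out = {} if out is None else out
    start = counter[0]
    counter[0] += 1
    for child in tree[node]:
        range_labels(child, counter, out)
    out[node] = (start, counter[0])
    counter[0] += 1
    return out

def is_ancestor_dewey(a, d, labels):
    # d descends from a iff d's label extends a's label.
    return labels[d].startswith(labels[a] + ".")

def is_ancestor_range(a, d, labels):
    # d descends from a iff d's interval lies inside a's interval.
    return labels[a][0] < labels[d][0] and labels[d][1] < labels[a][1]
```

Dewey labels encode the full root-to-node path (ancestry and sibling order can be read off the label itself, at the cost of longer labels deeper in the tree), while range labels stay fixed-size but carry no path information; this trade-off is what comparisons of labelling time and memory size measure.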

Area 2 - Mobile and NLP Information Systems

Full Papers
Paper Nr: 6
Title:

Ney Yibeogo - Hello World: A Voice Service Development Platform to Bridge the Web’s Digital Divide

Authors:

André Baart, Anna Bon, Victor de Boer, Wendelien Tuijp and Hans Akkermans

Abstract: The World Wide Web is a crucial open public space for knowledge sharing, content creation and application service provisioning for billions on this planet. Although it has a global reach, still more than three billion people do not have access to the Web, the majority of whom live in the Global South, often in rural regions, under low-resource conditions and with poor infrastructure. However, the need for knowledge sharing, content creation and application service provisioning is no less on the other side of this Digital Divide. In this paper we describe the Kasadaka platform that supports easy creation of local-content and voice-based information services, targeted at currently ‘unconnected’ populations and matching the associated resource and infrastructural requirements. The Kasadaka platform and especially its Voice Service Development Kit supports the formation of an ecosystem of decentralized voice-based information services that serve local populations and communities. This is, in fact, very much analogous to the services and functionalities offered by the Web, but in regions where Internet and Web are absent and will continue to be for the foreseeable future.

Paper Nr: 63
Title:

Agile Smart-device based Multi-factor Authentication for Modern Identity Management Systems

Authors:

Thomas Lenz and Vesna Krnjic

Abstract: Identification and authentication are essential processes in various application areas. While these processes are widely described and examined with respect to Web applications used on personal computers, the situation is more demanding on smart or mobile devices, as these devices provide other interfaces and exhibit different user behavior. Additionally, the smart and mobile technology sector evolves continuously, with no technology remaining stable over the years. Consequently, new usable, agile, and secure methods become necessary to bring identification and secure authentication to smart and mobile platforms. Several proposals have been published that attempt to solve this open problem. However, all of them fall short with respect to high-security authentication using smart or mobile devices only. In this paper, we propose an agile smart-device based multi-factor authentication model to close this gap. The proposed model combines multiple authenticators on the client side only, to increase the assurance of authentication at an eID consumer service. We illustrate the practical applicability of our model by implementing all needed components. Finally, we evaluated the implemented components in a first test with a small group and are currently preparing a wider pilot to evaluate the usability of our proposed model.

Short Papers
Paper Nr: 47
Title:

Physical Web for Smart Campus Management

Authors:

Giorgio Delzanno, Giovanna Guerrini, Maurizio Leotta and Marina Ribaudo

Abstract: The Physical Web enables smartphone users to interact with physical objects and locations through the use of beacon technology. Beacons are small devices placed on physical objects or at specific places that can be detected by users’ smartphones within a range of up to some tens of meters. In this way, users can receive notifications on their handsets or associate their presence with a specific place, enabling indoor localization. In this paper, we present the design and the prototype development of a platform for Smart Campus management based on the Physical Web metaphor. This beacon-enabled platform provides services for the registration and analysis of student attendance and for the scheduling of lectures, classroom allocation, and event notifications (e.g., notifying students when teachers are in their office). The software prototype has been implemented using state-of-the-practice technologies such as Node.js, Android, and MySQL and has been preliminarily tested in a real setting, in the context of the Computer Science Bachelor degree at the University of Genova, with encouraging results.

Posters
Paper Nr: 46
Title:

Electronic Band Scores and Stage Services Framework

Authors:

Georg J. Schneider and Dimitri Perepelkin

Abstract: The paper describes a framework for the mobile display of electronic sheet music for bands, using the UPnP infrastructure. Based on this architecture, further services can be integrated, such as MIDI sounds or light shows.

Area 3 - Service Based Information Systems, Platforms and Eco-Systems

Full Papers
Paper Nr: 10
Title:

Psychological Determinants and Consequences of Internet Usage: An Extension of the Technology Acceptance Model

Authors:

Yang Lu, Savvas Papagiannidis and Eleftherios Alamanos

Abstract: The Internet, as a representation of pervasive technological platforms, has brought a series of effects on our everyday life. An exploration of the determinants and consequences of Internet use from the psychological and social perspectives can facilitate current information system studies. As such, this paper hypothesised and examined the effects of a number of psychological factors on technology acceptance. A comprehensive framework has been put forward and empirically tested with data collected from 615 Internet users. Statistical results support that the hypothesised antecedents, i.e. social inclusion and psychological needs satisfaction, have significant effects on users’ beliefs and intentions of Internet usage. Also, the individuals’ continuance intention of using the Internet significantly affects their emotional reactions, well-being, and perceived value of the Internet.

Short Papers
Paper Nr: 37
Title:

A Hybrid Approach to Re-Host and Mix Transactional COBOL and Java Code in Java EE Web Applications using Open Source Software

Authors:

Philipp Brune

Abstract: Despite the common notion of mainframe-based transactional COBOL applications being an outdated technology, in many companies they continue to serve as the IT backbone. Therefore, in the era of big data and cloud services, these applications need to be transformed towards open, service-oriented architectures to preserve their value. This challenge has been tackled by different strategies so far, ranging from adding web service layers to existing mainframe applications to various products providing emulation on non-mainframe platforms. In contrast, in this paper this transformation is considered not merely as a COBOL re-hosting issue, but from the perspective of integrating COBOL into Java EE-based web applications. An open framework is demonstrated for executing existing transactional COBOL programs as part of Java EE application servers. It is built on established Open Source Software (OSS) components and runs on any Un*x-like operating system, in particular also on the mainframe itself.

Paper Nr: 60
Title:

Best Practice-based Evaluation of Software Engineering Tool Support: Collaborative Tool Support for Design, Data Modeling, Specification, and Automated Testing of Service Interfaces

Authors:

Fabian Ohler and Karl-Heinz Krempels

Abstract: Especially in complex software development projects, involving various actors and interaction interdependencies, the design of service interfaces is crucially important. In this work, a structured approach to support the design, specification and documentation of service interface standards is presented. To do so, we refer to a complex use case, dealing with the integration of multiple mobility services on a single platform. This endeavor requires the development of a large number of independently usable service interface standards which adhere to a multitude of quality aspects. A structured approach is required to speed up and simplify development and also to enable synergies between these service interfaces. In a previous work, we performed a requirements analysis to identify important aspects and shortcomings of the current development process and to elicit potential improvements. Starting with a first implementation of collaborative tool support for service interface development, we conducted a best practice-based evaluation with experts of the Association of German Transport Companies (VDV). In this paper, we want to present the results of this focus group-based evaluation and discuss their implications for the envisaged tool support for collaborative service interface development (design, data modeling, specification, and automated testing).

Posters
Paper Nr: 8
Title:

The Internet Connected Production Line: Realising the Ambition of Cloud Manufacturing

Authors:

Chris Turner and Jörn Mehnen

Abstract: This paper outlines a vision for Internet connected production complementary to the Cloud Manufacturing paradigm, reviewing current research and putting forward a generic outline of this form of manufacture. This paper describes the conceptual positioning and practical implementation of the latest developments in manufacturing practice such as Redistributed manufacturing, Cloud Manufacturing and the technologies promoted by Industry 4.0 and Industrial Internet agendas. Existing and future needs for customized production and the manufacturing flexibility required are examined. Future directions for manufacturing, enabled by web based connectivity are then proposed, concluding that the need for humans to remain ‘in the loop’ while automation develops is an essential ingredient of all future manufacturing scenarios.

Paper Nr: 11
Title:

A Monitoring based Multi-Agent Filtering Approach for Web Service Selection

Authors:

Raja Bellakhal, Fatma Siala and Khaled Ghédira

Abstract: During Web service selection processes based on negotiation approaches, systems initially search for services that comply with the users’ functional requirements. Then, based on the retrieved set of functionally similar services, the negotiators start negotiating in order to reach an agreement on the QoS parameter preferences. A problem occurs when the number of services retrieved during the first step is huge: in such a case, the performance of the QoS-based Web service selection process can degrade. Examining the specifications of all retrieved services is certainly a waste of time. Instead, it is more reasonable to remove all services that are unavailable or below the users’ requirement expectations before the negotiation starts. To deal with this issue, in this paper we propose a multi-agent based filtering approach that adopts a Web service monitoring method, allowing the filtering and selection of the best candidate Web services for the selection process. The results of the conducted experiments demonstrate that adopting an agent-based filtering process decreases the CPU time of the overall Web service selection process.
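The pre-negotiation filtering step can be illustrated with a minimal sketch; the service names, availability flags and response-time QoS attribute below are invented for illustration:

```python
# Monitoring output for candidate services that already match the
# functional requirements: (name, currently_available, response_ms).
services = [
    ("s1", True, 120),
    ("s2", False, 80),   # unavailable: dropped before negotiation
    ("s3", True, 400),   # too slow for the user's QoS expectation
    ("s4", True, 90),
]

def prefilter(services, max_response_ms):
    """Keep only available services that meet the user's QoS
    expectation, so negotiation runs on a smaller candidate set."""
    return [name for name, available, response_ms in services
            if available and response_ms <= max_response_ms]
```

Here `prefilter(services, 200)` leaves only s1 and s4 as negotiation candidates, which is the reduction in negotiation workload the approach aims for.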

Paper Nr: 28
Title:

Tailoring Enterprise Architecture Frameworks: Resource Structuring for Service-oriented Enterprises

Authors:

Aleksas Mamkaitis and Markus Helfert

Abstract: Enterprise Architecture (EA) is a discipline concerned primarily with enterprise Business-IT alignment. The Open Group Architecture Framework (TOGAF) is one of the leading EA frameworks and describes a methodology to carry out architecture work. TOGAF prescribes tailoring the architecture framework for enterprise architecture initiatives. However, there are no real-world examples of what such tailored frameworks would look like, nor does there exist a clear process describing how to construct them. In this paper, we show how to tailor an enterprise architecture framework for service-oriented enterprises.

Paper Nr: 38
Title:

Analytics in Supply Chain Management: Is There a Dark Side?

Authors:

Noushin Ashrafi and Jean-Pierre Kuilboer

Abstract: The growing ability to collect real-time data, combined with the desire to optimize efficiency and effectiveness, has pushed organizations to realize the value of analytics and the intelligent supply chain. This study offers an overview of supply chain management and the critical role of analytics in enhancing supply chain processes and, subsequently, performance. While efficiency and effectiveness are the ultimate measures of success for any organization, this study recommends a look at the consequences of such success in the global economy. As the world of commerce increasingly relies on outsourcing and the cheap labor market, the role of technology in expediting the exploitation of that market should be scrutinized. It is time to discuss not only the contributions of analytics in facilitating supply chain management but also its impact on exploitation through fierce competition among suppliers operating in developing countries.

Paper Nr: 40
Title:

Adaptive System to Support Decision-making of Dairy Ecosystem in Boyacá Department

Authors:

Javier Antonio Ballesteros-Ricaurte, Angela Carrillo-Ramos, Carlos Andrés Parra Acevedo and Juan Erasmo Gómez Morantes

Abstract: The dairy ecosystem of the Boyacá department is an important economic sector; nevertheless, it suffers economic losses due to various problems, particularly the lack of information to support decision-making processes regarding bovine disease management. It must be emphasized that although there are solutions to this type of problem, most of them only partially fulfill the required functionalities, such as disease simulation for particular regions, policy management, and notifications for different actors. For these reasons, we propose EiBeLec, an adaptive decision-support system in which users can visualize information, through services, according to their requirements, context, characteristics and information needs. In this position paper, we describe EiBeLec and emphasize the functionality provided to government users (e.g., mayor, governor, among others), supporting both decision making and regional infectious disease visualization, in order to provide the authorities with the tools to generate policies and strategies for disease control and eradication.

Paper Nr: 49
Title:

Assessment of Generic Skills through an Organizational Learning Process Model

Authors:

Antonio Balderas, Juan Antonio Caballero-Hernández, Juan Manuel Dodero, Manuel Palomo-Duarte and Iván Ruiz-Rube

Abstract: Performance in generic skills is increasingly important for organizations to succeed in the current competitive environment. However, assessing the level of performance in generic skills of the members of an organization is a challenging task, subject to both subjectivity and scalability issues. Organizations usually base their organizational learning processes on a Knowledge Management System (KMS). This work presents a process model to support managers of KMSs in the assessment of their members’ generic skills. The process model was deployed through an extended version of a learning management system, connected with different information system tools specifically developed to enrich its features. A case study with Computer Science final-year students working on a software system was conducted following an authentic learning approach, showing promising results.

Paper Nr: 59
Title:

Water Domiciliary Distribution Telemanagement Value Model

Authors:

Ivo Jorge Magalhães da Costa, José Henrique Pereira São Mamede and Luísa Margarida Cagica Carvalho

Abstract: The Internet of Things (IoT) represents a technical innovation that is already starting to play an important role in smarter water management: a wide variety of sensors incorporated into intelligent metering equipment, connected through wireless networks throughout the domiciliary water distribution network, can measure volume, flow, temperature, pressure, levels of chlorine, salinity and more. Water scarcity, aging or inadequate water distribution infrastructure, population variation, pollution, and more intense and frequent droughts and floods generate pressures that converge on the need to increase global investment in water infrastructure and to develop solutions for water conservation and management. The main stakeholders in the water distribution sector are the ones that can benefit most from the use of telemanagement. However, the results of adopting this innovation have been contrary to expectations, with traditional business models changing only slowly. The objective of this research is the construction of a value model that allows the identification of the actors, the value markets and the value exchanges related to the adoption of telemanagement in Portugal, grounded in a solid theoretical basis and validated in practice.

Area 4 - Web Interfaces

Short Papers
Paper Nr: 30
Title:

Simple Smart Homes Web Interfaces for Blind People

Authors:

Marina Buzzi, Barbara Leporini and Clara Meattini

Abstract: Great advances in technology over the last decade have contributed to making homes smarter and more comfortable, especially for people with disabilities. Many low-cost solutions are available on the market that can be controlled remotely by a Home Automation System (HAS). Unfortunately, the user interfaces are usually designed to be visually oriented, which can exclude some user categories, such as those who are blind. This paper focuses on the design of usable Web user interfaces for Home Automation Systems, with special attention to the functions as well as the interface arrangement, in order to enhance interaction via screen reader. The proposed indications could inspire other designers to make the user experience more satisfying and effective for people who interact via screen reader.

Area 5 - Web Intelligence

Full Papers
Paper Nr: 7
Title:

Neural Explainable Collective Non-negative Matrix Factorization for Recommender Systems

Authors:

Felipe Costa and Peter Dolog

Abstract: Explainable recommender systems aim to generate explanations for users according to their predicted scores, the user’s history and their similarity to other users. Recently, researchers have proposed explainable recommender models using topic models and sentiment analysis methods, providing explanations based on users’ reviews. However, such methods have neglected improvements in natural language processing, even though these methods are known to improve user satisfaction. In this paper, we propose a neural explainable collective non-negative matrix factorization (NECoNMF) model to predict ratings based on users’ feedback, for example, ratings and reviews. To do so, we use collective non-negative matrix factorization to predict user preferences according to different features and a natural language model to explain the prediction. Empirical experiments were conducted on two datasets, showing the model’s efficiency in predicting ratings and generating explanations. The results show that NECoNMF improves the accuracy of explainable recommendations in comparison with the state-of-the-art method by 18.3% for NDCG@5, 12.2% for HitRatio@5, 17.1% for NDCG@10, and 12.2% for HitRatio@10 on the Yelp dataset. Similar performance was observed on the Amazon dataset: 7.6% for NDCG@5, 1.3% for HitRatio@5, 7.9% for NDCG@10, and 3.9% for HitRatio@10.

Paper Nr: 15
Title:

Machine Learning-based Query Augmentation for SPARQL Endpoints

Authors:

Mariano Rico, Rizkallah Touma, Anna Queralt and María S. Pérez

Abstract: Linked Data repositories have become a popular source of publicly-available data. Users accessing this data through SPARQL endpoints usually launch several restrictive yet similar consecutive queries, either to find the information they need through trial-and-error or to query related resources. However, instead of executing each individual query separately, query augmentation aims at modifying the incoming queries to retrieve more data that is potentially relevant to subsequent requests. In this paper, we propose a novel approach to query augmentation for SPARQL endpoints based on machine learning. Our approach separates the structure of the query from its contents and measures two types of similarity, which are then used to predict the structure and contents of the augmented query. We test the approach on the real-world query logs of the Spanish and English DBpedia and show that our approach yields high-accuracy predictions. We also show that, by caching the results of the predicted augmented queries, we can retrieve data relevant to several subsequent queries at once, achieving a higher cache hit rate than previous approaches.

Paper Nr: 34
Title:

Semantic Models in Web based Educational System Integration

Authors:

Geraud Fokou Pelap, Catherine Faron Zucker and Fabien Gandon

Abstract: Web-based e-Education systems are an important kind of information system that has benefited from Web standards for implementation, deployment and integration. In this paper we propose and evaluate a semantic Web approach to support the features and interoperability of a real industrial e-Education system in production. We show how ontology-based knowledge representation supports the required features, their extension to new ones and the integration of external resources (e.g. official standards), as well as interoperability with other systems. We designed and implemented a proof of concept in an industrial context that was qualitatively and quantitatively evaluated, and we benchmarked different alternatives on real data and real queries. We present a complete evaluation of the quality of service and response time in this industrial context, and we show that on a real-world testbed Semantic Web based solutions can meet industrial requirements, both in terms of functionality and efficiency, compared to existing operational solutions. We also show that ontology-oriented modelling opens up new opportunities for advanced functionalities supporting resource recommendation and adaptive learning.

Paper Nr: 55
Title:

An Approach for Generating and Semantically Enriching Dataset Profiles

Authors:

Natacha Targino, Damires Souza and Ana Carolina Salgado

Abstract: The identification of appropriate datasets on the Web for a given task is still a challenge. To help matters, metadata describing datasets can be provided and made available through a Dataset Profile (DSP). In this light, this work presents an approach which generates a DSP composed of descriptive, structural and quality metadata. The DSP is enriched by semantically referencing the provided metadata and by adding new metadata, such as the dataset domain and further quality metadata, e.g. comprehensibility and processability. In order to evaluate the proposed approach, a prototype has been developed and experiments have been conducted.

Short Papers
Paper Nr: 22
Title:

A Pipeline Supporting a Smart Access to Historical Documents based on a Rich Semantic Representation of Their Content: A Case Study on Time Expressions

Authors:

Alessandro Baldo, Anna Goy and Diego Magro

Abstract: This work is part of two ongoing projects whose main goal is to demonstrate how semantic technologies can support effective access to historical archives. In this paper we present a full pipeline, from raw texts up to the final user interface, aimed at creating and exploiting such semantic representations. The pipeline is structured in three modules - handling information extraction, semantic representation, and queries - and offers external applications the possibility of accessing, and thus re-using, the output of each module, by providing a tagged text, a SPARQL endpoint, and a RESTful web service. In the paper, we describe the details of a proof-of-concept implementation of the pipeline architecture that focuses on time expressions. Moreover, we present an example application that exploits the pipeline to enable users to access historical documents by searching and browsing events and time specifications, thus demonstrating the effectiveness of access to historical texts based on a rich semantic representation of their content.

Paper Nr: 57
Title:

Current State of the Art to Detect Fake News in Social Media: Global Trends and Next Challenges

Authors:

Alvaro Figueira, Nuno Guimaraes and Luis Torgo

Abstract: Nowadays, false news can be created and disseminated easily through the many social media platforms, resulting in widespread real-world impact. Modeling and characterizing how false information proliferates on social platforms and why it succeeds in deceiving readers are critical to developing efficient algorithms and tools for its early detection. A recent surge of research in this area has aimed to address the key issues using methods based on machine learning, deep learning, feature engineering, graph mining, and image and video analysis, together with newly created datasets and web services to identify deceiving content. The majority of the research has targeted fake reviews, biased messages, and against-the-facts information (false news and hoaxes). In this work, we present a survey of the state of the art concerning types of fake news and the solutions being proposed. We focus our survey on content analysis, network propagation, fact-checking, fake news analysis, and emerging detection systems. We also discuss the rationale behind successfully deceiving readers. Finally, we highlight important challenges that these solutions bring.

Posters
Paper Nr: 16
Title:

RDF Mapper: Easy Conversion of Relational Databases to RDF

Authors:

Eliot Bytyçi, Lule Ahmedi and Granit Gashi

Abstract: Nowadays, with the growing necessity to serve data through the Web in a Linked Data model, similar to the way DBpedia extracts structural information from Wikipedia, it is becoming common to require existing tabular data to be mapped into RDF. In this paper, a straightforward alternative for mapping data from the relational database model to RDF is introduced. The mapper has not yet reached maturity but nevertheless greatly reduces the amount of time needed for conversion. Furthermore, it allows the user to select only the parts that need to be converted, thus preventing unnecessary conversion.

Paper Nr: 18
Title:

The Terror Network Industrial Complex: A Measurement and Analysis of Terrorist Networks and War Stocks

Authors:

James Usher and Pierpaolo Dondio

Abstract: This paper presents a measurement study and analysis of the structure of multiple Islamic terrorist networks to determine whether similar characteristics exist between those networks. We examine data gathered from four terrorist groups: Al-Qaeda, ISIS, Lashkar-e-Taiba (LeT) and Jemaah Islamiyah (JI), consisting of six terror networks. Our study contains 471 terrorist nodes and 2078 links. Each terror network is compared in terms of efficiency, communication and composition of network metrics. The paper also examines the effects these terrorist attacks had on US aerospace and defence stocks (herein war stocks). We found that the Islamic terror groups increase recruitment during planned attacks, that communication between subordinate terrorists increases during and after the attacks, and that low density is a common feature of Islamic terrorist groups. The Al-Qaeda organisational structure was the most complex and superior in terms of secrecy, diameter, clustering, modularity and density. Jemaah Islamiyah followed a similar structure, though less markedly so. The ISIS and LeT organisational structures were more concerned with operational efficiency than with secrecy. We found that war stock prices and the S&P 500 were lower the day after the attacks; however, the war stocks slightly outperformed the S&P 500 the day after the attacks. Further, we found that war stock prices were significantly lower one month after the terrorist attacks, but the S&P 500 rebounded one month later.