In the educational hypermedia domain, adaptive systems try to tailor educational materials to the characteristics of each user. Such adaptation becomes more effective once the system knows how a student learns best. Studies suggest that, for effective personalization, one important feature is knowing the learning style of a student precisely. However, learning styles are dynamic and may vary across domains. To address these aspects, we have proposed a computationally efficient solution that accounts for the dynamic and nondeterministic nature of learning styles, the effect of the subject domain, and nonstationarity during the learning process. The proposed model is novel, robust, and flexible enough to optimize students' domain-wise learning style preferences for better content adaptation. We have developed a web-based experimental prototype for assessment and validation. The proposed model is compared with an existing learning-style-based model, and the experimental results show that personalization incorporating discipline-wise learning style variations is more effective.
Higher education worldwide has been affected by the COVID-19 pandemic, which disrupted students' attendance and forced universities to close in more than 190 countries. At the same time, novice engineers have typically taken only a few lectures related to highway engineering, and those lectures cover very little about asphalt pavement construction, since highway engineering spans many areas that are not studied in detail under a traditional education. Consequently, the new drive to promote online learning paves the way to re-evaluate curriculum development and the delivery of educational materials for engineering courses. Experts can offer solutions to these problems from their past experience, so a system that allows experts to share their experience with other engineers after completing a project is needed. The web-based expert system for maintaining flexible pavement problems in tropical regions designed in this study is a novel concept. Prior to developing the system, the need for it was established through a literature review and validated through a questionnaire survey. Experts were interviewed, and a questionnaire survey was conducted to construct the system's knowledge base. Knowledge was represented as rules and coded in PHP. Web pages supporting the user interface were designed using a framework consisting of CSS, HTML, and jQuery. The system was then tested by an array of users engaged in highway engineering, namely experts, teaching experts, novice engineers, and students. The mean values of the overall system evaluation performed by 20 users on a five-point Likert scale were 4, 4.5, 3.75, 4.25, 5, 4, and 3.5. Expert and user satisfaction support the effectiveness of the proposed system.
With the current changes driven by the expansion of the World Wide Web, this book takes a different approach from other books on the market: it applies ontologies to electronically available information to improve the quality of knowledge management in large and distributed organizations. Ontologies are formal theories that support knowledge sharing and reuse. They can explicitly represent the semantics of semi-structured information, enabling sophisticated automatic support for acquiring, maintaining, and accessing information. Methodology and tools are developed for intelligent access to large volumes of semi-structured and textual information sources in intranet-, extranet-, and internet-based environments, employing the full power of ontologies to support knowledge management from both the information client's and the information provider's perspectives. The aim of the book is to support efficient and effective knowledge management, with a focus on weakly structured online information sources. It is aimed primarily at researchers in knowledge management and information retrieval, and will also be a useful reference for postgraduate students in computer science and for business managers aiming to improve their corporation's information infrastructure. The Semantic Web is a very important initiative affecting the future of the WWW that is currently generating huge interest. The book covers several significant contributions to the Semantic Web research effort, including a new language for defining ontologies, several novel software tools, and a coherent methodology for applying the tools for business advantage. It also provides three case studies which give examples of the real benefits to be derived from adopting Semantic Web-based ontologies in "real world" situations. As such, the book is an excellent mixture of theory, tools, and applications in an important area of WWW research.
* Provides guidelines for introducing knowledge management concepts and tools into enterprises, helping knowledge providers present their knowledge efficiently and effectively.
* Introduces an intelligent search tool that supports users in accessing information, together with a tool environment for the maintenance, conversion, and acquisition of information sources.
* Discusses three large case studies that help develop the technology according to the actual needs of large and/or virtual organisations and provide a testbed for evaluating tools and methods.
The book is aimed at people with at least a good understanding of existing WWW technology and some technical understanding of the underpinning technologies. It will be of interest to graduate students, academic and industrial researchers in the field, and the many industrial personnel tracking WWW technology developments in order to understand the business implications. It could also be used to support undergraduate courses in the area but is not itself an introductory text.
Purpose – This paper aims to present an overview of the various ethical, societal, and critical issues that micro- and nanotechnology-based small, energy self-sufficient sensor systems raise in selected application fields. An ethical approach to the development of these technologies was taken in a very large international, multi-technological European project. The authors' approach and methodology are presented in the paper and, based on this review, the authors propose general principles for this kind of work. Design/methodology/approach – The authors' approach is based on extensive experience of working together in multidisciplinary teams. Ethical issues have usually been handled in the authors' work to some degree. In this project, the authors had the opportunity to emphasise the human view in technological development, utilise their experience from previous work, and customise their approach to this particular case. In short, the authors created a wide set of application scenarios with technical and application-field experts in their research project. The scenarios were evaluated with external application-field experts, potential consumer users, and ethics experts. Findings – Based on their experiences in this project and in previous work, the authors suggest a preliminary model for construction activity within technology development projects. The authors call this model the Human-Driven Design approach, with Ethics by Design as a more focussed subset of it. As all enabling technologies have both positive and negative usage possibilities, and so-called ethical assessment tends to focus on negative consequences, some stakeholders have doubts about including ethical perspectives in a technology development project.
Research limitations/implications – The authors argue that the ethical perspective would be more influential if it provided a more positive and constructive contribution to the development of technology. The main findings on the ethical challenges encountered in this project were as follows: the main user concerns related to access to information, the digital divide, and the necessity of all the proposed measurements; the ethics experts highlighted privacy, autonomy, user control, freedom, medicalisation, and human existence as the main ethical issues. Practical implications – Various technology assessment models and ethical approaches to technological development have been developed and applied for a long time, and recently a new approach called Responsible Research and Innovation has been introduced. The authors' intention is to give a concrete example for further development as part of this approach. Social implications – The authors' study covers various consumer application possibilities for small sensor systems. The application fields studied include health, well-being, safety, sustainability, and empathic user interfaces. The authors believe that the ethical challenges identified are valuable to other researchers and practitioners who are studying and developing sensor-based solutions in similar fields. Originality/value – The study gives a concrete, documented example of integrating ethical reflection into a multidisciplinary technology development project covering several consumer application fields, and its findings are valuable to researchers and practitioners developing sensor-based solutions in similar fields.
In order to improve the management of copyright on the Internet, known as Digital Rights Management, there is a need for a shared language for copyright representation. Current approaches are based on purely syntactic solutions, i.e. a grammar that defines a rights expression language. These languages are difficult to put into practice due to the lack of explicit semantics that would facilitate their implementation. Moreover, they are simplistic from the legal point of view because they are intended only to model the usage licenses granted by content providers to end-users. Thus, they ignore the copyright framework that lies behind them and the whole value chain from creators to end-users. Our proposal is to use a semantic approach based on Semantic Web ontologies. We detail the development of a copyright ontology that puts this approach into practice. It models the core copyright concepts of creation and rights, and the basic kinds of actions that operate on content. Altogether, it allows building a copyright framework for the complete value chain. The set of actions operating on content forms our smallest building blocks, which let us cope with the complexity of copyright value chains and statements while guaranteeing a high level of interoperability and evolvability. The resulting copyright modelling framework is flexible and complete enough to model many copyright scenarios, not just those related to the economic exploitation of content. The ontology also includes moral rights, so such situations can be modelled too, as shown in the included example model for a withdrawal scenario. Finally, the ontology design and the selection of tools result in a straightforward implementation. Description Logic reasoners are used for license checking and retrieval. Rights are modelled as classes of actions, action patterns are also modelled as classes, and the same is done for concrete actions.
Checking whether some right or license grants an action then reduces to checking class subsumption, which is a direct functionality of these reasoners.
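The subsumption-based license check described above can be illustrated with a toy sketch. The actual system uses OWL ontologies and a Description Logic reasoner; here Python's class hierarchy merely stands in for the ontology, and all class names (Action, Reproduce, Distribute, NonCommercialReproduce) are illustrative assumptions, not terms from the real ontology.

```python
# Toy analogy: rights and licenses are classes of actions, so "does this
# license grant this action?" becomes a subsumption (subclass) check.
# A real implementation would use an OWL ontology plus a DL reasoner;
# Python's issubclass stands in for the reasoner here.

class Action:                 # root of the action hierarchy
    pass

class Reproduce(Action):      # e.g. copying content
    pass

class Distribute(Action):     # e.g. making content available
    pass

class NonCommercialReproduce(Reproduce):  # a more specific action pattern
    pass

def grants(granted_action_cls, requested_action_cls):
    """A license granting `granted_action_cls` covers any requested action
    whose class is subsumed by (is a subclass of) the granted class."""
    return issubclass(requested_action_cls, granted_action_cls)

print(grants(Reproduce, NonCommercialReproduce))  # True: subsumed
print(grants(Reproduce, Distribute))              # False: not subsumed
```

The point of the analogy is that no bespoke rule engine is needed: once rights and actions live in one class hierarchy, the license check falls out of the subsumption service the reasoner already provides.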
Choice is a sine qua non of contemporary life. From childhood until death, we are faced with an unending series of choices through which we cultivate a sense of self, govern conduct, and shape the future. Nowadays, individuals increasingly experience and enact consumer choice online through web-based platforms such as Yelp.com, TripAdvisor.com and Amazon.com. These platforms not only provide a sprawling array of goods and services to choose from, but also reviews, ratings and ranking devices and systems of classification to navigate this landscape of choice. This paper suggests a radical reconsideration of platform architectures and design features to consider how they reconfigure and respecify choice, ‘choosers’, and choice-making practices. Platforms are not simply cameras that present choice and enable comparisons between different options, but are more akin to engines that govern, drive and expand choice, configuring users within particular discourses, practices and subjectivities. In making sense of the entangled trajectories of consumer choice, platform architectures and Big Data, I suggest that ‘hyper-choice’ emerges as a condition of the contemporary platform-driven web. I examine hyper-choice not only in terms of the relationship between platforms and a growing abundance of choice, but more importantly how platforms reconfigure choice in ways that go beyond and fundamentally challenge existing understandings of what choice is, who and what is involved in producing knowledge about choice, and what it means to be a ‘chooser’.
The question of how humans can learn efficiently to make decisions in a complex, dynamic, and uncertain environment remains very much open. We investigate the effects of giving feedback in a computer-simulated microworld controlled by participants. This has a direct impact on training simulators already in standard use in many professions, e.g., flight simulators for pilots, and a potential impact on a better understanding of human decision making in general. Our study is based on a benchmark microworld with an economic framing, the IWR Tailorshop. N = 94 participants played four rounds of the microworld, each spanning 10 simulated months, via a web interface. We propose a new approach to quantifying performance and learning, based on a mathematical model of the microworld and optimization. Six participant groups received different kinds of feedback in a training phase; results in a subsequent performance phase without feedback were then analyzed. As a main result, feedback of optimal solutions in training rounds improved model knowledge, early learning, and performance, especially when this information was encoded in a graphical representation.
Ethical tasks faced by researchers in science and engineering include recognizing moral problems in their practice, finding solutions to those problems, judging moral actions, and engaging in preventive ethics. Given these tasks, appropriate pedagogical objectives for research ethics education include (1) teaching researchers to recognize moral issues in their research, (2) teaching researchers to solve practical moral problems in their research from the perspective of the moral agent, (3) teaching researchers how to make moral judgments about actions, and (4) teaching researchers to engage in preventive ethics. If web-based research ethics education is to be adequate and sufficient for research ethics education, it must meet those objectives. However, there are reasons to be skeptical that it can.
When a widely reused ontology appears in a new version that is not compatible with older versions, the ontologies reusing it need to be updated accordingly. Ontobull has been developed to automatically update ontologies with new term IRIs and associated metadata to take account of such version changes. To use the Ontobull web interface, a user is required to (i) upload one or more ontology OWL source files; (ii) input an ontology term IRI mapping; and, where needed, (iii) provide update settings for ontology headers and XML namespace IDs. Using this information, the backend Ontobull Java program automatically updates the OWL ontology files with the desired term IRIs and ontology metadata. The Ontobull subprogram BFOConvert supports the conversion of an ontology that imports a previous version of BFO. A use case is provided to demonstrate the features of Ontobull and BFOConvert.
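The core IRI-rewriting step behind such a tool can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Ontobull's actual Java implementation; the function name, the example OWL snippet, and the example IRIs are all invented for the sketch.

```python
# Sketch of the essential term-IRI update: given a mapping from deprecated
# term IRIs to their replacements, rewrite every occurrence in an OWL
# (RDF/XML) source string. A production tool would also update ontology
# headers and namespace IDs, as Ontobull does.

def update_term_iris(owl_source: str, iri_mapping: dict) -> str:
    """Replace each old term IRI with its new IRI throughout the source.
    Longer IRIs are handled first so an IRI that is a prefix of another
    cannot clobber it."""
    for old_iri in sorted(iri_mapping, key=len, reverse=True):
        owl_source = owl_source.replace(old_iri, iri_mapping[old_iri])
    return owl_source

src = '<owl:Class rdf:about="http://example.org/obo/FOO_0000001"/>'
mapping = {"http://example.org/obo/FOO_0000001":
           "http://example.org/obo/FOO_0000002"}
print(update_term_iris(src, mapping))
```

Plain string replacement suffices for this sketch because term IRIs are globally unique strings; a robust tool would parse the XML to avoid touching IRIs inside literals.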
Web services are self-describing and self-contained modular applications based on the network. As web service applications deepen, service consumers' requirements for service functionality and service quality have gradually increased. To select the optimal plan from a large number of execution plans with the same functionality but different QoS characteristics, this paper proposes a web service selection algorithm that supports QoS global optimization and dynamic replanning. The algorithm uses position-matrix coding to represent all execution paths and replanning information of the service composition. By calculating the Hamming distance of service quality between individuals, the quality of the service composition is improved. By enforcing the user's total time limit and implementing a good-solution retention strategy, the impact of algorithm running time on service quality is mitigated. The experimental results show that the method proposed in this paper tracks the evolution of QoS effectively, stays close to the requester's needs, and can better satisfy users. The algorithm improves the user's satisfaction with the returned service to a certain extent and improves the efficiency of service invocation.
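The Hamming-distance comparison between candidate plans can be sketched as follows. The encoding here, one candidate-service index per abstract task, i.e. a flattened row of a position matrix, is an assumption made for illustration; the paper's exact representation may differ.

```python
# Sketch: Hamming distance between two candidate execution plans, each
# encoded as a vector of selected concrete services per abstract task.
# In a population-based search, a larger distance between individuals
# indicates more diversity among candidate compositions.

def hamming_distance(plan_a, plan_b):
    """Number of abstract tasks for which the two plans select
    different concrete services."""
    if len(plan_a) != len(plan_b):
        raise ValueError("plans must cover the same abstract tasks")
    return sum(a != b for a, b in zip(plan_a, plan_b))

# Two plans over 5 abstract tasks; entries are candidate-service indices.
plan_a = [2, 0, 1, 3, 1]
plan_b = [2, 1, 1, 0, 1]
print(hamming_distance(plan_a, plan_b))  # differs at tasks 1 and 3 -> 2
```

In a selection algorithm such a distance can be used to keep the retained "good solutions" spread out, so the search does not collapse onto near-identical compositions.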
The concept of co-production was originally introduced by political science to explain citizen participation in the provision of public goods. The concept was quickly adopted in business research targeting the question of how users could be voluntarily integrated into industrial production settings to improve the development of goods and services. With the emergence of social software and web-based collaborative infrastructures, the concept of co-production gains importance as a theoretical framework for the collaborative production of web content and services. This article argues that co-production is a powerful concept, which helps to explain the emergence of user-generated content and the partial transformation of orthodox business models in the content industries. Applying the concept of co-production to developmental policies could help to theorize and derive new models of including underprivileged user groups and communities in collaborative value creation on the web, for the mutual benefit of service providers and users.
The tsunami effect of the COVID-19 pandemic is affecting many aspects of scientific activity. Multidisciplinary experimental studies with international collaborators are hindered by closed national borders, logistical issues due to lockdowns, quarantine restrictions, and social distancing requirements. The full impact of this crisis on science is not yet clear, but the issues mentioned above have most certainly restrained academic research activities. Sharing innovative solutions between researchers is in high demand in this situation. The aim of this paper is to share our successful practice of using web-based communication and remote control software for real-time long-distance control of brain stimulation. This solution may guide and encourage researchers to cope with restrictions and has the potential to help expand international collaborations by reducing travel time and costs.
Infants show impressive speech decoding abilities and detect acoustic regularities that highlight the syntactic relations of a language, often coded via non-adjacent dependencies (NADs). It has been claimed that infants learn NADs implicitly and associatively through passive listening, and that there is a shift from effortless associative learning to a more controlled learning of NADs after the age of 2 years, potentially driven by the maturation of the prefrontal cortex. To investigate whether older children are able to learn NADs, Lammertink et al. recently developed a word-monitoring serial reaction time (SRT) task and showed that 6–11-year-old children learned the NADs, as their reaction times increased when they were presented with violated NADs. In the current study we adapted their experimental paradigm and tested NAD learning in a younger group of 52 children between 4 and 8 years of age in a remote, web-based, game-like setting. Children were exposed to Italian phrases containing NADs and had to monitor the occurrence of a target syllable, which was the second element of the NAD. After exposure, children did a “Stem Completion” task in which they were presented with the first element of the NAD and had to choose the second element to complete the stimulus. Our findings show that, despite large variability in the data, children aged 4–8 years are sensitive to NADs: they show the expected differences in reaction times in the SRT task and could transfer the NAD rule to the Stem Completion task. We discuss these results with respect to the development of NAD learning in childhood and the practical impact and limitations of collecting these data in a web-based setting.
Quality of life (QoL) is an important outcome measure in mental health care. Currently, QoL is mainly measured with paper-and-pencil questionnaires. To contribute to the evaluation of treatment, and to support substantiated policy decisions in the allocation of resources, a web-based, personalized, patient-friendly and easy-to-administer QoL instrument has been developed: the QoL-ME. Because human values play a significant role in shaping future use practices of technologies, it is important to anticipate them during the design of the QoL instrument. The value sensitive design (VSD) approach offers a theory and method for addressing these values in a systematic and principled manner in the design of technologies. While the VSD approach has been applied in the field of somatic care, we extended it to the field of mental healthcare with the aim of enabling developers of the QoL instrument to reflect on important human values and anticipate potential value conflicts in its design. We therefore explored how VSD can be used by investigating the human values that are relevant to the design of the QoL-ME. Our exploration reveals that the values of autonomy, efficiency, empowerment, universal usability, privacy, redefinition of roles and responsibilities, reliability, solidarity, surveillance and trust are at stake for the future users of the technology. However, we argue that theoretical reflections on the potential ethical impact of a technology in the design phase can only go so far. To comprehensively evaluate the usability of the VSD approach, a supplementary study of the use practices of the technology is needed.
Crime is a serious and growing worldwide issue, and fast reporting of crimes is a crucial part of fighting it. The problem is visible in Iraq, where people avoid sharing information due to a lack of trust in the security system, despite the existence of some contact lines between citizens and police. Furthermore, there has been little empirical study in this field. To address these issues, we propose a multifaceted approach to crime reporting and police control. This study has two goals: investigating the methods currently adopted for reporting crimes to police sectors in order to identify the gap, and developing a mobile application for crime reporting that keeps reports undisclosed and exclusive to the crime witnesses who file them. The approach involved 200 participants in developing the proposed app. Results show that the proposed system can quickly monitor and track criminals based on a cloud-based online database. The application user specifies certain details to be sent, such as location, case type and time; other information is sent directly by the system following the designed algorithm.
It is a matter of fact that Europe is facing ever more crucial challenges in health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies are a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements, and overall conditions of assisted individuals, as well as in assessing their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, and to meet high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and of how these function and interact with end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products, and research projects; the open challenges are also highlighted. The report ends with an overview of the challenges, hindrances, and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is reviewed.
The never-ending growth of digital information and the availability of low-cost storage facilities and network capacity are leading users to move their data to remote storage resources. Since users' data often holds identity-related information, several privacy issues arise when data can be stored in untrusted domains. In addition, digital identity management is becoming extremely complicated due to the proliferation of identity replicas necessary to obtain authentication in different domains. GMail and Amazon Web Services, for instance, are two examples of online services adopted by millions of users throughout the world which hold huge amounts of sensitive user data. State-of-the-art encryption tools for large-scale distributed infrastructures allow users to encrypt content locally before storing it on a remote untrusted repository. This approach can suffer performance drawbacks when very large data sets must be encrypted or decrypted on a single machine. The proposed approach extends existing solutions by providing two additional features: (1) encryption can also be delegated to a pool of remote trusted computing resources, and (2) an encryption context drives the tool to select the best strategy to process the data. The performance benchmarks are based on the results of tests carried out both on a local workstation and on the Grid INFN Laboratory for Dissemination Activities (GILDA) testbed.
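The idea of delegating encryption work to a pool of resources can be sketched as chunking the payload and processing chunks in parallel. This is a toy illustration only: the worker pool here is a local thread pool standing in for remote trusted resources, and the hash-derived XOR keystream is a stand-in for a real cipher (e.g. AES) and must not be used for actual security.

```python
# Toy sketch: split a large payload into chunks and delegate their
# encryption to a pool of workers, mirroring the delegation of work to
# remote trusted computing resources. NOT a secure cipher.

import hashlib
from concurrent.futures import ThreadPoolExecutor

def keystream_xor(chunk: bytes, key: bytes, index: int) -> bytes:
    """Encrypt/decrypt one chunk with a hash-derived keystream (toy cipher).
    XOR is its own inverse, so the same function decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(chunk):
        stream += hashlib.sha256(key + index.to_bytes(8, "big")
                                 + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(c ^ s for c, s in zip(chunk, stream))

def process(data: bytes, key: bytes, chunk_size: int = 1024) -> bytes:
    """Chunk the payload and process chunks concurrently, preserving order."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:   # stands in for the remote pool
        out = pool.map(keystream_xor, chunks, [key] * len(chunks),
                       range(len(chunks)))
    return b"".join(out)

data = b"sensitive payload" * 200
key = b"shared secret"
assert process(process(data, key), key) == data  # XOR cipher is symmetric
```

An "encryption context" as in the paper would then choose between such a pooled strategy and plain local encryption depending on data size and available resources.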
Environmental landscaping builds, plans, and manages landscapes in a way that considers the ecology of a site and produces gardens that benefit both people and the rest of the ecosystem. Landscaping and the environment are combined in landscape design planning to provide holistic answers to complex issues. Seeding native species and eradicating alien species are just a few of the ways humans influence a region's ecosystem. Landscape architecture is the design and modification of landscapes, urban areas, and gardens. It comprises the construction of urban and rural landscapes by coordinating the creation and management of open spaces, while accounting for economics and working within a confined project budget. Global warming and water shortages are widely discussed, yet there is much hope to be found even in the face of seemingly insurmountable obstacles. With the advent of Web 4.0 and human-centred computing, AI is becoming more significant in many elements of urban landscape planning and design. This work creates a virtual reality-based landscape environment intended to make building deep neural networks (DNNs) for deep learning (DL) more user-friendly and efficient. In this environment, users construct neural networks manually by manipulating virtual objects; these arrangements are automatically converted into a model, and real-time results on the test set are reported so that users remain aware of the DNN models they are producing. This research presents a novel strategy for combining DL-DNNs with landscape architecture, providing a long-term approach to the problem of environmental pollution. Carbon dioxide levels are constantly monitored when green plants are in and around the house; plants, in turn, remove toxins from the air, making it easier to maintain a healthy environment. Human-centred AI-based Web 4.0 may be used to assess and evaluate the data model, and the study's findings can be fed back into the design process for further modification and optimization.
(From the British Columbia Philosophy Graduate Conference.) In response to the "Causal Drainage" objection to his Supervenience Argument, Kim introduces micro-based properties and argues that their presence prohibits any causal drainage between metaphysical levels. Noordhof disagrees and instead argues that the causal powers of the 'micro-bases' of micro-based properties seem to preempt the causal powers of micro-based properties, in much the same way as Kim claims the powers of subvening base properties preempt the powers of supervenient properties. Thus Noordhof argues that the causal powers of higher-level micro-based properties still seem to drain downward to their lower-level micro-bases. In this paper I defend Noordhof and argue that this drainage is in fact due to the fact that micro-based properties seem to supervene on their micro-bases. I thus argue that micro-based properties fall victim to the very same Supervenience Argument that Kim himself presents, and I conclude that even micro-based properties turn out to be causally impotent if Kim's Supervenience Argument is sound.
In modern-day technology, the level of knowledge is increasing day by day in terms of volume, velocity, and variety. Understanding such knowledge is a dire need for an individual seeking to extract meaningful insight from it. With the advancement in computer and image-based technologies, visualization has become one of the most significant platforms to extract, interpret, and communicate information. In data modelling, visualization is the process of extracting knowledge to reveal the detailed structure and process of data. The proposed study aims to model user knowledge, data modelling, and visualization through a fuzzy logic-based approach. The experimental setup is validated on the user knowledge modelling dataset available in the UCI web repository. The results show that the model is effective and efficient in situations where uncertainty and complexity arise.
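A minimal sketch of the kind of fuzzy inference involved, assuming triangular membership functions over two normalized features (study time and exam performance) and a toy Mamdani-style rule base; the feature names, set shapes, and rules are illustrative, not the paper's.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def knowledge_level(study, score):
    """Classify user knowledge from two features in [0, 1] (toy rule base)."""
    low  = lambda x: tri(x, -0.4, 0.0, 0.5)
    mid  = lambda x: tri(x,  0.1, 0.5, 0.9)
    high = lambda x: tri(x,  0.5, 1.0, 1.4)
    # Mamdani-style rules: min for AND, then pick the strongest conclusion.
    levels = {
        "very_low": min(low(study),  low(score)),
        "middle":   min(mid(study),  mid(score)),
        "high":     min(high(study), high(score)),
    }
    return max(levels, key=levels.get)
```

The appeal of the fuzzy approach in this setting is that borderline students activate several rules at once, and the strongest aggregated membership decides the label rather than a hard cut-off.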
Marketing in the social network environment integrates current advanced internet and information technologies. This marketing method not only broadens marketing channels and builds a network communication platform but also meets the purchase needs of customers in the entire market and shortens the customer purchase process; it is also an inevitable product of the development of the times. However, when companies use social networks for product marketing, they usually face the impact of multiple realistic factors. This article takes the maximization of influence as the main idea to find seed users for product information dissemination and also considers users' interest preferences. The target users can influence the product, and the company should control marketing costs to obtain a larger marginal benefit. Based on this, this paper considers factors such as the scale of information diffusion, user interest preferences, and corporate budgets, treats influence maximization as a multiobjective optimization problem, and proposes a multiobjective influence maximization model. To address the NP-hardness of influence maximization, this paper uses Monte Carlo sampling to identify high-influence users. Next, a seed user selection algorithm based on NSGA-II is proposed to optimize the above three objective functions and find the optimal solution. We use real social network data to verify the performance of the models and methods. Experiments show that the proposed model can generate appropriate seed sets and can meet different purposes of information dissemination. Sensitivity analysis proves that our model is robust under different actual conditions.
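The Monte Carlo estimation step can be sketched under the independent cascade model, the standard diffusion model in influence maximization (the paper does not specify its exact model, so this is an assumption): each newly activated user activates each neighbor once with probability p, and repeated simulations average the spread.

```python
import random

def simulate_cascade(graph, seeds, p, rng):
    """One independent-cascade run; returns the set of activated users."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def estimate_influence(graph, seeds, p=0.1, runs=1000, seed=0):
    """Monte Carlo estimate of the expected spread of a seed set."""
    rng = random.Random(seed)
    return sum(len(simulate_cascade(graph, seeds, p, rng))
               for _ in range(runs)) / runs
```

An NSGA-II wrapper would then treat this estimate as one objective alongside seed cost and interest match when ranking candidate seed sets.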
In order to shorten the time for users to query news on the Internet, this paper studies and designs a network news data extraction technology, which can obtain the main news information through the extraction of news text keywords. Firstly, the TF-IDF, TextRank, and LDA keyword extraction algorithms are analyzed to understand the keyword extraction process, and the TF-IDF algorithm is optimized using Zipf's law. By introducing the idea of model fusion, five schemes based on waterfall fusion and parallel combination fusion are designed, and the effects of the five schemes are verified by experiments. It is found that the designed extraction technology has a good effect on network news data extraction. News keyword extraction has great application prospects and can provide a basis for research on news key phrases, news abstracts, and so on.
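The TF-IDF step can be sketched as follows: term frequency within an article weighted by inverse document frequency across the corpus. This is the textbook formulation, not the paper's Zipf-optimized variant, and the whitespace tokenizer is deliberately naive.

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=3):
    """Rank words of docs[doc_index] by TF-IDF over the given corpus."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter()                      # document frequency of each word
    for toks in tokenized:
        df.update(set(toks))
    tf = Counter(tokenized[doc_index])  # raw term counts in the target doc
    total = len(tokenized[doc_index])
    score = {w: (c / total) * math.log(n / df[w]) for w, c in tf.items()}
    return sorted(score, key=score.get, reverse=True)[:top_k]
```

Words appearing in every document (like "the") get an IDF of zero and fall to the bottom of the ranking, which is exactly the behavior a keyword extractor wants.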
This article explores the notion of the Web-extended mind, which is the idea that the technological and informational elements of the Web can sometimes serve as part of the mechanistic substrate that realizes human mental states and processes. It is argued that while current forms of the Web may not be particularly suited to the realization of Web-extended minds, new forms of user interaction technology as well as new approaches to information representation do provide promising new opportunities for Web-based forms of cognitive extension. In addition, it is suggested that extended cognitive systems often rely on the emergence of social practices and conventions that shape how a technology is used. Web-extended minds may thus depend on forms of socio-technical co-evolution in which social forces and factors play just as important a role as do the processes of technology design and development.
Systems with human-centered artificial intelligence are only as good as their ability to consider their users' context when making decisions. Research on identifying people's everyday activities has evolved rapidly, but little attention has been paid to recognizing both the activities themselves and the motions made during those tasks. Automated monitoring, human-to-computer interaction, and sports analysis all benefit from Web 4.0. Every sport has its own moves, and not every move is known to everyone; in ice hockey, the referee cannot monitor every move. Here, a Convolutional Neural Network-based Real-Time Image Processing Framework (CNN-RTIPF) is introduced to classify every move in ice hockey. CNN-RTIPF can reduce the challenges in monitoring each player's moves individually. The image of every move is captured and compared with the trained data in the CNN. These real-time captured images are processed using a human-centered artificial intelligence system and classified by comparing the probabilities predicted for the trained set of images. Simulation analysis shows that the proposed CNN-RTIPF can classify real-time images with an improved classification ratio, improved sensitivity, and a reduced error rate. The proposed CNN-RTIPF has been validated based on the optimization parameter for reliability. To improve the algorithm for movement identification and train the system for many other everyday activities, human-centered artificial intelligence-based Web 4.0 will continue to develop.
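At the heart of any CNN classifier is the convolution (implemented in practice as cross-correlation) of an image patch with a learned kernel. The following toy valid-mode implementation shows only that core operation on nested lists, not the paper's CNN-RTIPF.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation (no padding, stride 1),
    the core operation of a CNN layer."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

Stacking such filtered maps, nonlinearities, and pooling layers, and ending with a softmax over move classes, yields the probability comparison the abstract describes.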
Along with the rapid application of new information technologies, the data-driven era is coming, and online consumption platforms are booming. However, massive user data have not been fully exploited for design value, and the application of data-driven methods in requirements engineering needs to be further expanded. This study proposes a data-driven expectation prediction framework based on social exchange theory, which analyzes user expectations in the consumption process and predicts improvement plans to help designers make better design improvements. According to the classification and concept definition of social exchange resources, consumption exchange elements were divided into seven categories: money, commodity, services, information, value, emotion, and status. Based on these categories, two data-driven methods, namely word frequency statistics and scale surveys, were combined to analyze user-generated data. Then, a mathematical expectation formula was used to carry out user expectation prediction. Moreover, by calculating mathematical expectations, explicit and implicit expectations are distinguished to derive a reliable design improvement plan. To validate its feasibility and advantages, an illustrative example of the CoCo Fresh Tea & Juice service system improvement design is adopted. As an exploratory study, it is hoped that this study provides useful insights into the data mining process of consumption comments.
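The mathematical expectation step can be sketched as a plain weighted average over rated exchange elements; the explicit/implicit threshold below is a hypothetical illustration, not the paper's actual cut-off.

```python
def expectation(values, probs):
    """E[X] = sum(p_i * x_i) over possible ratings of an exchange element."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in zip(probs, values))

def expectation_type(values, probs, threshold=3.0):
    """Label an expectation explicit or implicit via a (hypothetical) threshold
    on the expected rating of a five-point scale."""
    return "explicit" if expectation(values, probs) >= threshold else "implicit"
```

Here the probabilities would come from the word-frequency and survey statistics, so frequently voiced, highly rated needs surface as explicit expectations.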
The supervisor of the activities of a system user should benefit from the knowledge contained in the user's event logs, which allow the monitoring of sequential and parallel user activities. To make event logs more accessible to the supervisor, we suggest a process mining approach that begins with the design of a model for understanding the activities of a system user. The model design is based on the relationships between the event logs and the activities of a system user. An intervention model completes the understanding model to assist the supervisor: it enables the supervisor to act on critical activities and to detect anomalies. The models are automatically designed with a model-driven engineering approach. An experiment on a smart home system, where the supervisor is a medical or paramedical staff member, illustrates this tooled design.
Estimating the effects of introducing a range of smart mobility solutions within an urban area is a crucial concern in urban planning. The lack of a simulator for the assessment of mobility initiatives forces local public authorities and mobility service providers to base their decisions on guidelines derived from common heuristics and best practices. These approaches can help planners in shaping mobility solutions; however, given the high number of variables to consider, the effects are not guaranteed: a solution conceived respecting the available guidelines can still fail in a different context. Particularly difficult aspects to consider are the interactions between the different mobility services available in a given urban area and the acceptance of a given mobility initiative by the inhabitants of the area. In order to fill this gap, we introduce Tangramob, an agent-based simulation framework capable of assessing the impacts of a smart mobility initiative within an urban area of interest. Tangramob simulates how urban traffic is expected to evolve as citizens start experiencing newly offered traveling solutions. This allows decision makers to evaluate the efficacy of their initiatives, taking into account the current urban system. In this paper, we provide an overview of the simulation framework along with its design. To show the potential of Tangramob, three mobility initiatives are simulated and compared in the same scenario. This demonstrates how it is possible to perform comparative experiments so as to align mobility initiatives to user goals.
KM (Knowledge Management) systems have recently been adopted within the realm of enterprise management. At the same time, data mining technology is widely acknowledged within information systems' R&D divisions; in particular, acquisition of meaningful information from Web usage data has become one of the most exciting areas. In this paper, we employ a Web-based KM system and propose a framework for applying Web usage mining technology to KM data. As it turns out, task duration varies according to different user operations, such as referencing a table-of-contents page, downloading a target file, and writing to a bulletin board, which in turn makes it possible to predict the purpose of the user's task. Taking these observations into account, we segmented access log data manually and compared the results with those obtained by applying the constant interval method. Next, we obtained a segmentation rule for Web access logs by applying a machine-learning algorithm to the manually segmented access logs as training data. The newly obtained segmentation rule was then compared with other known methods, including the time interval method, by evaluating their segmentation results in terms of recall and precision, and our rule attained the best results in both measures. Furthermore, the segmented data were fed to an association rule miner, and the obtained association rules were utilized to modify the Web structure.
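The constant (time) interval baseline the paper compares against can be sketched in a few lines: split a user's sorted access timestamps into sessions whenever the gap between consecutive requests exceeds a fixed threshold. The 30-minute default below is the conventional web-analytics value, an assumption here.

```python
def segment_sessions(timestamps, max_gap=1800):
    """Split sorted access timestamps (seconds) into sessions at gaps
    longer than max_gap. A learned segmentation rule would replace this
    single fixed threshold with conditions mined from labeled logs."""
    if not timestamps:
        return []
    sessions = [[timestamps[0]]]
    for t in timestamps[1:]:
        if t - sessions[-1][-1] > max_gap:
            sessions.append([t])     # gap too long: start a new session
        else:
            sessions[-1].append(t)   # same session continues
    return sessions
```

The paper's observation that task duration depends on the operation type is precisely why a single `max_gap` underperforms a rule that conditions on the operation.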
Since e-Commerce has become a discipline, e-Contracts are acknowledged as the tools that will assure the safety and robustness of transactions. A typical e-Contract is a binding agreement between parties that creates relations and obligations. It consists of clauses that address specific tasks of the overall procedure, which can be represented as workflows. Similarly to e-Contracts, intelligent agents manage a private policy, a set of rules representing requirements, obligations, and restrictions, in addition to personal data that meet their user's interests. In this context, this study proposes a policy-based e-Contract workflow management methodology that can be used by semantic web agents, since agents benefit from Semantic Web technologies for data and policy exchange, such as RDF and RuleML, which maximize interoperability among parties. Furthermore, this study presents the integration of the above methodology into a multi-agent knowledge-based framework in order to deal with issues related to rule exchange where no common syntax is used, since this framework provides reasoning services that assist agents in interpreting the exchanged policies. Finally, a B2C e-Commerce scenario is presented that demonstrates the added value of the approach.
Robot manipulators have been extensively used in complex environments to complete diverse tasks. Teleoperation control based on human-like adaptivity in the robot manipulator is a growing and challenging field. This paper developed a disturbance-observer-based fuzzy control framework for a robot manipulator using an electromyography (EMG)-driven neuromusculoskeletal (NMS) model. The motion intention was estimated by the EMG-driven NMS model from the user's EMG signals and joint angles. The desired torque was transformed into a desired velocity for the robot manipulator system through an admittance filter. In the robot manipulator system, a fuzzy logic system utilizing an integral Lyapunov function was applied to handle model uncertainties and external disturbances. To compensate for the external disturbances, fuzzy approximation errors, and nonlinear dynamics, a disturbance observer was integrated into the controller. The developed control algorithm was validated with a 2-DOF robot manipulator in simulation. The results indicate the proposed control framework is effective and promising for applications in robot manipulator control.
Purpose: The purpose of this paper is to discuss some ethical issues in the internet encounter between customer and bank. Empirical data relate to the difficulties that customers have when they deal with the bank through internet technology and electronic banking. The authors discuss the difficulties that customers expressed from an ethical standpoint. Design/methodology/approach: The key problem of the paper is "how does research handle the user's lack of competence in a web-based commercial environment?" The authors illustrate this ethical dilemma with data from a Danish bank collected in 2002. The data have been structured by an advanced text-analytic method, Pertex. Findings: The authors conclude that the experience of lack of competency in internet banking implies severe damage to the experience of the ethics of the good life and of respect for the basic ethical principles of customer autonomy, dignity, integrity, and vulnerability. However, increased experience of competency may imply an increased feeling of ethical superiority and of the good life among customers. Research limitations/implications: The important implication of this study for managerial research would be for banks to focus on customer competency with an ethical concern instead of being concerned only with technical solutions for effective internet operations. Practical implications: Since more and more businesses are digitally based, the authors foresee a potential generic problem of lack of competence for certain age groups and for people from different social groups. Originality/value: The paper provides an analysis of the ethics of online banking on the basis of the Pertex methodology and with the use of the basic ethical principles of autonomy, dignity, integrity, and vulnerability.
Web legal information retrieval systems need the capability to reason with the knowledge modeled by legal ontologies. Using this knowledge it is possible to represent and to make inferences about the semantic content of legal documents. In this paper a methodology for applying NLP techniques to automatically create a legal ontology is proposed. The ontology is defined in the OWL semantic web language and it is used in a logic programming framework, EVOLP+ISCO, to allow users to query the semantic content of the documents. ISCO allows an easy and efficient integration of declarative, object-oriented and constraint-based programming techniques with the capability to create connections with external databases. EVOLP is a dynamic logic programming framework allowing the definition of rules for actions and events. An application of the proposed methodology to the legal web information retrieval system of the Portuguese Attorney General's Office is described.
At present, news broadcast systems using mobile networks on the market provide the basic functions required by TV stations, but there are still many problems and shortcomings. In view of the main problems existing in the current systems and the actual needs of current users, this paper develops a preliminary news broadcast system based on 5G Live. The card frame adaptive strategy significantly improves the user experience by using gradual video frame buffering technology. Hardware codec technology significantly reduces the consumption of system resources, and the high-compression H.264 algorithm can reduce network bandwidth by 50% compared with MPEG-2 and MPEG-4 without a significant change in image quality. At the same time, the use of mobile video acquisition terminals in the system not only solves the problem that satellite broadcast vehicles cannot reach the site due to the lack of roads but also greatly reduces the cost of early deployment and late maintenance of the news broadcast system. This paper studies the card frame adaptive strategy, the solution for reducing system resource consumption, and the deployment scheme of the mobile video and audio transmission terminal, which is of great significance for improving the design and research of news broadcast systems under wireless network applications and also has reference value for the design of other broadcasting and television solutions.
From the end of 2018 in China, the Big-data Driven Price Discrimination (BDPD) of online consumption raised public debate on social media. To study consumers' attitudes toward BDPD, this study constructed a semantic recognition frame to deconstruct the Affection-Behavior-Cognition consumer attitude theory using machine learning models, including Labeled Latent Dirichlet Allocation, Long Short-Term Memory, and Snow Natural Language Processing, based on a dataset of social media comment texts. Consistent with published questionnaire results, this article verified that 61% of consumers expressed negative sentiment toward BDPD in general. On a finer scale, however, this study further measured negative sentiments that differ significantly among topics. The measurement results show that the topics "Regular Customers Priced High" and "Usage Intention" occupy the top two places of negative sentiment among consumers, while the topic "Precision Marketing" is at the bottom. Moreover, semantic recognition shows that 49% of consumers' comments involve multiple topics, indicating that consumers have a fairly clear understanding of the complex status of BDPD. Importantly, this study found some topics that had not been examined in previous studies; for example, more than 8% of consumers call for government and legal departments to regulate BDPD behavior, which indicates that quite a few consumers are losing confidence in the self-discipline of platform enterprises. Another interesting result is that consumers who pursue solutions to BDPD belong to two mutually exclusive groups: government protection and self-protection. The significance of this study is that it reminds e-commerce platforms to pay attention to the potential psychological harm to consumers while gaining additional profits through BDPD. Otherwise, negative consumer attitudes may damage brand image, business reputation, and the sustainable development of the platforms themselves. The study also provides government supervision departments with an advanced analysis method for more effective administration to protect social fairness.
In this position paper, we use Alan Cooper's persona technique to illustrate the utility of audio- and video-based AAL technologies. To this end, two primary examples of potential audio- and video-based AAL users, Anna and Irakli, serve as reference points for describing salient ethical, legal, and social challenges related to the use of AAL. These challenges are presented on three levels: individual, societal, and regulatory. For each challenge, a set of policy recommendations is suggested.
Discovery of representative Web pages regarding specific topics is important for assisting users' information retrieval from the Web. Research on Web structure mining, whose goal is to discover or to rank important Web pages based on the graph structure of hyperlinks, has been very active recently. A complete bipartite subgraph of the Web graph, composed of centers (containing useful information regarding a specific topic) and fans (containing hyperlinks to centers), can be regarded as a Web community sharing a common interest. Although Murata's method for discovering Web communities is a simple method for finding related Web pages, it has the following weaknesses: (1) since the number of centers increases monotonically, pages irrelevant to the members of Web communities may be added in the process of discovery, and (2) since the number of fans decreases monotonically as the number of centers increases, the method may suffer topic drift. This paper describes an improved method for refining Web communities in order to acquire representative Web pages for the topics of input Web communities. The method is based on the assumptions that most fans contain hyperlinks pointing to representative pages regarding their topic, and that hyperlinks to pages of the same quality often co-occur. In our new method, both fans and centers are renewed iteratively by a majority vote of the members of the previous Web community. Results of our experiments show that the new method is able to find desirable pages for several topics.
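A hedged sketch of the iterative majority-vote renewal: in each round, centers are re-elected as pages linked by at least a quorum of the current fans, and fans that no longer point to any center are dropped. The quorum value and data layout are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def refine_community(fans, links, rounds=10, quorum=0.5):
    """Iteratively renew fans and centers by majority vote.

    links maps each fan page to the set of pages it links to."""
    fans = set(fans)
    centers = set()
    for _ in range(rounds):
        votes = Counter()
        for f in fans:
            votes.update(links.get(f, set()))
        # Majority vote: keep pages linked by at least a quorum of fans.
        centers = {p for p, c in votes.items() if c >= quorum * len(fans)}
        # Keep only fans still pointing to some current center.
        new_fans = {f for f in fans if links.get(f, set()) & centers}
        if new_fans == fans:   # fixed point reached
            break
        fans = new_fans
    return fans, centers
```

Renewing both sides each round is what counters the monotone growth of centers and the topic drift described above: an off-topic fan loses all its centers and drops out.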
AgeTech involves the use of emerging technologies to support the health, well-being, and independent living of older adults. In this paper we focus on how AgeTech based on artificial intelligence (AI) may better support older adults to remain in their own living environment for longer, provide social connectedness, support well-being and mental health, and enable social participation. In order to assess and better understand the positive as well as negative outcomes of AI-based AgeTech, a critical analysis of ethical design, digital equity, and policy pathways is required. A crucial question is how AI-based AgeTech may drive practical, equitable, and inclusive multilevel solutions to support healthy, active ageing. In our paper, we aim to show that a focus on equity is key for AI-based AgeTech if it is to realize its full potential. We propose that equity should not just be an extra benefit or minimum requirement but the explicit aim of designing AI-based health tech. This means that social determinants that affect the use of or access to these technologies have to be addressed. We explore how complexity management, as a crucial element of AI-based AgeTech, may potentially create and exacerbate social inequities by marginalising or ignoring social determinants. We identify bias, standardization, and access as the main ethical issues in this context and subsequently make recommendations as to how inequities that stem from AI-based AgeTech can be addressed.
This study evaluates the diversified relationships established under the umbrella of the Stimulus-Organism-Response (SOR) framework to study consumer continuation intention on the Airbnb platform from a Malaysian perspective. A web-based survey was conducted among Malaysian Airbnb consumers, and a sample of 303 respondents was obtained. SmartPLS was used for data analysis. The statistical output of the respondents' data indicates that social overload and information overload influence consumer continuation intention. Moreover, satisfaction and trust in the platform partially mediate the relationship between the stimuli and the behavioral response. Further, perceived health risk strengthens the negative relationship between continuation intention and trust in the platform. The theoretical implications include enacting the SOR framework to understand the consumer's internal state of mind and its ability to influence platform continuation intention. The practical implications suggest that managers and business owners focus on limiting social exposure at the host destination and the flow of information from the application.
This paper proposes a method for discovering Web communities. A complete bipartite graph K_{i,j} of Web pages can be regarded as a community sharing a common interest, and discovery of such communities is expected to assist users' information retrieval from the Web. The method proposed in this paper is based on the assumption that hyperlinks to related Web pages often co-occur. Relations between Web pages are detected by the co-occurrence of hyperlinks on pages acquired from a search engine by backlink search. In order to find a new member of a Web community, all the hyperlinks contained in the acquired pages are extracted, and the page pointed to by the most frequent hyperlink is regarded as a new member of the community. We have built a system which discovers complete bipartite graphs based on this method. From only a few URLs of initial community members, the system succeeds in discovering several genres of Web communities without analyzing the contents of Web pages.
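The growth step can be sketched directly from the description: gather the hyperlinks appearing on the pages returned by backlink search and admit the most frequently pointed-to page as the next member. The data layout below is an assumption for illustration.

```python
from collections import Counter

def next_member(fan_pages, community):
    """fan_pages: list of hyperlink lists, one per page obtained by
    backlink search; returns the most frequently linked page that is
    not already a community member, or None if there is no candidate."""
    counts = Counter(link
                     for page_links in fan_pages
                     for link in page_links
                     if link not in community)
    return counts.most_common(1)[0][0] if counts else None
```

Repeating this step, with fresh backlink searches as the community grows, expands the complete bipartite subgraph without ever inspecting page content.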
This article addresses the question of whether personal surveillance on the world wide web is different in nature and intensity from that in the offline world. The article presents a profile of the ways in which privacy problems were framed and addressed in the 1970s and 1990s. Based on an analysis of privacy news stories from 1999–2000, it then presents a typology of the kinds of surveillance practices that have emerged as a result of Internet communications. Five practices are discussed and illustrated: surveillance by glitch, surveillance by default, surveillance by design, surveillance by possession, and surveillance by subject. The article offers some tentative conclusions about the progressive latency of tracking devices, about the complexity created by multi-sourcing, about the robustness of clickstream data, and about the erosion of the distinction between the monitor and the monitored. These trends emphasize the need to reject analysis that frames our understanding of Internet surveillance in terms of its impact on society. Rather, the Internet should be regarded as a form of life whose evolving structure becomes embedded in human consciousness and social practice, and whose architecture embodies an inherent valence that is gradually shifting away from the assumptions of anonymity upon which the Internet was originally designed.
This article considers the government, opinion leaders, and Internet users as a system for correcting false information, and it addresses the problem of correcting false information that arises in the aftermath of major emergencies. We use optimal control theory and differential game theory to construct differential game models of decentralized decision-making, centralized decision-making, and subsidized decision-making. The solutions to these models and their numerical simulations show that when the government, opinion leaders, and Internet users exercise cost-subsidized decision-making instead of decentralized decision-making, the equilibrium strategies, local optimal benefits, and overall optimal benefits of the system achieve Pareto improvement. Given the goal of maximizing the benefits to the system under centralized decision-making, the equilibrium results are Pareto-optimal. The research here provides a theoretical basis for dealing with the mechanism of correcting false information arising from major emergencies, and our conclusions provide methodological support for the government to effectively deal with such scenarios.
Web mining refers to the whole of data mining and related techniques that are used to automatically discover and extract information from web documents and services. When used in a business context and applied to some type of personal data, it helps companies to build detailed customer profiles, and gain marketing intelligence. Web mining does, however, pose a threat to some important ethical values like privacy and individuality. Web mining makes it difficult for an individual to autonomously control the unveiling and dissemination of data about his/her private life. To study these threats, we distinguish between 'content and structure mining' and 'usage mining.' Web content and structure mining is a cause for concern when data published on the web in a certain context is mined and combined with other data for use in a totally different context. Web usage mining raises privacy concerns when web users are traced, and their actions are analysed without their knowledge. Furthermore, both types of web mining are often used to create customer files with a strong tendency of judging and treating people on the basis of group characteristics instead of on their own individual characteristics and merits (referred to as de-individualisation). Although there are a variety of solutions to privacy problems, none of these solutions offers sufficient protection. Only a combined solution package consisting of solutions at an individual as well as a collective level can contribute to release some of the tension between the advantages and the disadvantages of web mining. The values of privacy and individuality should be respected and protected to make sure that people are judged and treated fairly. People should be aware of the seriousness of the dangers and continuously discuss these ethical issues. This should be a joint responsibility shared by web miners (both adopters and developers), web users, and governments.
Human creativity generates novel ideas to solve real-world problems. This thereby grants us the power to transform the surrounding world and extend our human attributes beyond what is currently possible. Creative ideas are not just new and unexpected, but are also successful in providing solutions that are useful, efficient and valuable. Thus, creativity optimizes the use of available resources and increases wealth. The origin of human creativity, however, is poorly understood, and semantic measures that could predict the success of generated ideas are currently unknown. Here, we analyze a dataset of design problem-solving conversations in real-world settings by using 49 semantic measures based on WordNet 3.1 and demonstrate that a divergence of semantic similarity, an increased information content, and a decreased polysemy predict the success of generated ideas. The first feedback from clients also enhances information content and leads to a divergence of successful ideas in creative problem solving. These results advance cognitive science by identifying real-world processes in human problem solving that are relevant to the success of produced solutions and provide tools for real-time monitoring of problem solving, student training and skill acquisition. A selected subset of information content (IC Sánchez–Batet) and semantic similarity (Lin/Sánchez–Batet) measures, which are both statistically powerful and computationally fast, could support the development of technologies for computer-assisted enhancements of human creativity or for the implementation of creativity in machines endowed with general artificial intelligence.
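The two families of measures can be sketched with a toy taxonomy (the concepts and counts below are invented, not WordNet data): classic Resnik-style information content IC(c) = -log p(c), and Lin similarity sim(a, b) = 2·IC(lcs(a, b)) / (IC(a) + IC(b)). The Sánchez–Batet formulation used in the paper replaces corpus frequencies with intrinsic taxonomy structure, but the Lin combination rule is the same.

```python
import math

# Toy is-a taxonomy with invented occurrence counts (not WordNet data).
freq = {"entity": 100, "animal": 40, "dog": 10, "cat": 8}
parent = {"dog": "animal", "cat": "animal", "animal": None}

def ic(c):
    """Resnik-style information content: -log p(c)."""
    return -math.log(freq[c] / freq["entity"])

def lcs(a, b):
    """Least common subsumer of a and b in the toy taxonomy."""
    ancestors = set()
    while a is not None:
        ancestors.add(a)
        a = parent.get(a)
    while b not in ancestors:
        b = parent[b]
    return b

def lin(a, b):
    """Lin similarity: 2 * IC(lcs) / (IC(a) + IC(b)), in [0, 1]."""
    return 2 * ic(lcs(a, b)) / (ic(a) + ic(b))
```

Tracking how such similarity scores between successive utterances diverge over a design conversation is the kind of computation the real-time monitoring tools mentioned above would perform.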
Regulations affect every aspect of our lives. Compliance with the regulations impacts citizens and businesses alike: they have to find their rights and obligations in a complex legal environment. The situation is even more complex when multiple languages and time versions of regulations must be considered. To address these demands, we present a semantic enrichment approach that aims at decreasing the ambiguity of legal texts, increasing the probability of finding the relevant legal materials, and supporting the application of legal (...) reasoners. Our approach is implemented both as a service for citizens and businesses and as a modeling environment for legal drafters. To evaluate the usefulness of the approach, a case study was carried out in a large organization and applied to corporate regulations and Hungarian laws. The results suggest that this approach can support these aims. (shrink)
As the Internet has become a basic resource of information, retrieval systems for not only texts but also images have appeared. However, many of these systems supply only a list of images, so users have to seek the intended images one by one. Image labeling is one solution to this problem, but a wide variety of words may be attached to an image if the words are extracted from only a single Web page. Therefore, this paper proposes an image clustering system that labels images (...) with words related to a search keyword. These relationships are measured using Web pages on the WWW. The experimental results show that users were able to find the intended images faster than with an ordinary image search system. (shrink)
In this paper, we develop an organization method of page-information agents for an adaptive interface between a user and a Web search engine. Although a Web search engine returns a hit list of Web pages for a user's query using a large database, the list includes many useless pages. Thus a user has to select potentially useful Web pages based on the page information shown on the hit list, and then actually fetch each Web page to investigate its relevance. Unfortunately, since the page information (...) on a hit list is neither sufficient nor necessary for a user, adequate information is needed for valid selection. However, which information is adequate depends on the user and the task. Hence we propose the adaptive interface AOAI, in which different page-information agents are organized through man-machine interaction. In AOAI, page-information agents that display different page information on a hit list, such as file size, network traffic, and page title, are prepared first. A user evaluates them while searching with a search engine, and the agents are organized based on this evaluation. As a result, different organizations are achieved depending on the user and the task. Finally, we conducted experiments with subjects and found that AOAI is promising as an adaptive interface between a user and a search engine. (shrink)
Recently, many opportunities have emerged to use the Internet in daily life and classrooms. However, with the growth of the World Wide Web (Web), it is becoming increasingly difficult to find target information on the Internet. In this study, we explore a method for developing users' ability to seek information on the Web, and construct a search process feedback system that supports reflective activities during information seeking on the Web. Reflection is defined as a cognitive activity for monitoring, evaluating, (...) and modifying one's thinking and process. In the field of learning science, many researchers have investigated reflective activities that facilitate learners' problem solving and deep understanding. The characteristics of this system are: (1) to show learners' search processes on the Web as described, based on a cognitive schema, and (2) to prompt learners to reflect on their search processes. We expect that users of this system can reflect on their search processes by receiving information on their own search processes provided by the system, and that these types of reflective activity help them to deepen their understanding of information-seeking activities. We conducted an experiment to investigate the effects of our system. The experimental results confirmed that (1) the system actually facilitated the learners' reflective activities by providing process visualization and prompts, and (2) the learners who reflected on their search processes more actively understood their own search processes more deeply. (shrink)