The urgent drive for vaccine development in the midst of the current COVID-19 pandemic has prompted public and private organisations to invest heavily in research and development of a COVID-19 vaccine. Organisations globally have affirmed a commitment to fair global access, but the means by which a successful vaccine can be mass produced and equitably distributed remains notably unanswered. Barriers for low-income countries include the inability to afford vaccines as well as inadequate resources to vaccinate, barriers that are exacerbated during a pandemic. Fair distribution of a pandemic vaccine is unlikely without a solid ethical framework for allocation. This piece analyses four allocation paradigms: ability to develop or purchase; reciprocity; ability to implement; and distributive justice, and synthesises their ethical considerations to develop an allocation model to fit the COVID-19 pandemic.
Achieving the global benefits of artificial intelligence will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures, and more practical challenges of coordinating across different locations. This paper focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, as regions which currently have an outsized impact on the development of AI ethics and governance. We suggest that there is reason to be optimistic about achieving greater cross-cultural cooperation on AI ethics and governance. We argue that misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed. Even where fundamental differences exist, these may not necessarily prevent productive cross-cultural cooperation, for two reasons: cooperation does not require achieving agreement on principles and standards for all areas of AI; and it is sometimes possible to reach agreement on practical issues despite disagreement on more abstract values or principles. We believe that academia has a key role to play in promoting cross-cultural cooperation on AI ethics and governance, by building greater mutual understanding, and clarifying where different forms of agreement will be both necessary and possible. We make a number of recommendations for practical steps and initiatives, including translation and multilingual publication of key documents, researcher exchange programmes, and development of research agendas on cross-cultural topics.
In his classic book “The Foundations of Statistics”, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it, but he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts. The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
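In symbols, the representation the abstract describes takes the following standard form (generic textbook notation, not the paper's own):

```latex
% Savage-style representation theorem (standard form). For acts
% f, g : S -> C and a preference relation \succeq satisfying Savage's
% postulates, there exist a finitely additive probability measure P on
% the algebra of events and a utility function u : C -> R such that
\[
  f \succeq g
  \quad\Longleftrightarrow\quad
  \int_S u\bigl(f(s)\bigr)\,dP(s) \;\ge\; \int_S u\bigl(g(s)\bigr)\,dP(s).
\]
```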
There is a long-standing disagreement in the philosophy of probability and Bayesian decision theory about whether an agent can hold a meaningful credence about an upcoming action, while she deliberates about what to do. Can she believe that it is, say, 70% probable that she will do A, while she chooses whether to do A? No, say some philosophers, for Deliberation Crowds Out Prediction (DCOP), but others disagree. In this paper, we propose a valid core for DCOP, and identify terminological causes of some of the apparent disputes.
Can an agent deliberating about an action A hold a meaningful credence that she will do A? 'No', say some authors, for 'Deliberation Crowds Out Prediction' (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce's rejection of DCOP rests on terminological choices about terms such as 'intention', 'prediction', and 'belief'. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce's Evidential Autonomy Thesis (EAT) is effectively DCOP, in different terminological clothing. Both principles rest on the so-called 'transparency' of first-person present-tensed reflection on one's own mental states.
In countries such as China, where Confucianism is the backbone of national culture, high-social-status entrepreneurs are inclined to engage in corporate social responsibility (CSR) activities due to perceived high stress from stakeholders and a high perceived ability to carry out CSR. Based on a large-scale survey of private enterprises in China, our paper finds that Chinese entrepreneurs at private firms who have high social status are prone to engage in social responsibility efforts. In addition, high-social-status Chinese entrepreneurs are even more likely to engage in social responsibility efforts as they become more politically connected and as their region becomes more market-oriented. These findings extend the upper echelons perspective of CSR into the Chinese context by shedding light on antecedents of CSR from a new perspective and clarifying the boundary conditions of the social status–CSR link from the institutional perspective.
Causalists and Evidentialists can agree about the right course of action in an (apparent) Newcomb problem, if the causal facts are not as they initially seem. If declining $1,000 causes the Predictor to have placed $1m in the opaque box, CDT agrees with EDT that one-boxing is rational. This creates a difficulty for Causalists. We explain the problem with reference to Dummett's work on backward causation and Lewis's on chance and crystal balls. We show that the possibility that the causal facts might be properly judged to be non-standard in Newcomb problems leads to a dilemma for Causalism. One horn embraces a subjectivist understanding of causation, in a sense analogous to Lewis's own subjectivist conception of objective chance. In this case the analogy with chance reveals a terminological choice point, such that either (i) CDT is completely reconciled with EDT, or (ii) EDT takes precedence in the cases in which the two theories give different recommendations. The other horn of the dilemma rejects subjectivism, but now the analogy with chance suggests that it is simply mysterious why causation so construed should constrain rational action.
This paper offers a fine-grained analysis of different versions of the well-known sure-thing principle. We show that Savage's formal formulation of the principle, i.e., his second postulate (P2), is strictly stronger than what was originally intended.
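For reference, P2 is commonly stated along the following lines (a standard textbook formulation, not a quotation from Savage or from this paper):

```latex
% Sure-thing principle (Savage's P2), standard formulation.
% Suppose acts f and g agree with each other outside an event E, acts
% f' and g' also agree with each other outside E, f agrees with f' on E,
% and g agrees with g' on E. Then
\[
  f \succeq g \quad\Longleftrightarrow\quad f' \succeq g'.
\]
% Informally: the preference between two acts depends only on the
% consequences they yield on the event where they differ.
```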
The event-triggered consensus control problem for leader-following multiagent systems subject to external disturbances is investigated using output feedback. In particular, a novel distributed event-triggered protocol is proposed by adopting dynamic observers to estimate the internal state information based on the measurable output signal. It is shown that under the developed observer-based event-triggered protocol, multiple agents reach consensus with the desired disturbance attenuation ability while exhibiting no Zeno behavior. Finally, a simulation is presented to verify the obtained results.
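As a rough numerical illustration of the event-triggered idea, the sketch below simulates leader-following consensus for single-integrator followers with event-sampled broadcasts; it omits the paper's external disturbances, output feedback, and dynamic observers, and the graph, gains, and triggering threshold are illustrative assumptions rather than the proposed protocol.

```python
import numpy as np

# Event-triggered leader-following consensus, heavily simplified:
# single-integrator followers track a static leader at x0 = 0, and each
# follower rebroadcasts its state only when its measurement error grows
# too large. The topology, gains, and threshold are illustrative assumptions.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # follower-to-follower adjacency
b = np.array([1, 0, 0, 1], dtype=float)     # leader pinning gains
k, sigma, dt, T = 1.0, 0.1, 0.01, 10.0      # control gain, trigger level, step, horizon

x = np.random.randn(4)      # follower states
x_hat = x.copy()            # last broadcast (event-sampled) states
x0 = 0.0                    # static leader state

for _ in range(int(T / dt)):
    # consensus error built only from event-sampled neighbour information
    e_net = A @ x_hat - A.sum(axis=1) * x_hat + b * (x0 - x_hat)
    x = x + dt * k * e_net
    # trigger rule: rebroadcast when the local measurement error exceeds a
    # state-dependent threshold (the small constant helps avoid Zeno behavior)
    trigger = np.abs(x - x_hat) > sigma * np.abs(e_net) + 1e-3
    x_hat = np.where(trigger, x, x_hat)

print("final follower states:", x)   # should all be close to the leader state 0
```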
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states), and numeric utilities assigned to consequences. Savage's derivation, however, is based on a well-known, highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including in simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
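Formally, the constant-act assumption at issue is usually written as follows (standard notation, not the paper's):

```latex
% Constant-act assumption (standard formulation): for every consequence
% c there is an act that yields c in every state of the world,
\[
  \forall c \in C \;\; \exists f_c \in \mathcal{A} \quad
  \text{such that} \quad f_c(s) = c \ \text{ for all } s \in S .
\]
```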
Recently, infrared human action recognition has attracted increasing attention because it has many advantages over visible light, namely robustness to illumination changes and shadows. However, infrared action data have so far been limited, which degrades the performance of infrared action recognition. Motivated by the idea of transfer learning, an infrared human action recognition framework using auxiliary data from visible light is proposed to solve the problem of limited infrared action data. In the proposed framework, we first construct a novel Cross-Dataset Feature Alignment and Generalization framework to map the infrared data and visible light data into a common feature space, where Kernel Manifold Alignment and a dual aligned-to-generalized encoders model are employed to represent the features. Then, a support vector machine is trained, using both the infrared data and visible light data, to classify the features derived from infrared data. The proposed method is evaluated on InfAR, a publicly available infrared human action dataset. To build up auxiliary data, we set up a novel visible light action dataset, XD145. Experimental results show that the proposed method achieves state-of-the-art performance compared with several transfer learning and domain adaptation methods.
This paper proposes an innovative ducted fan aerial manipulator, which is particularly suitable for tasks in confined environments where traditional multirotors and helicopters would be inaccessible. The dynamic model of the aerial manipulator is established through comprehensive mechanism modeling and parametric frequency-domain identification. On this basis, a composite controller for the aerial platform is proposed. A basic static robust controller is designed via H-infinity synthesis to achieve baseline performance, and an adaptive auxiliary loop is designed to estimate and compensate for the effect exerted on the vehicle by the manipulator. Computer simulation analyses show good stability of the aerial vehicle under manipulator motion and good tracking performance of the manipulator end effector, which verify the feasibility of the proposed aerial manipulator design and the effectiveness of the proposed controller, indicating that the system can meet the requirements of high-precision operation tasks.
The recent rapid development of information technology, such as sensing technology, communications technology, and databases, allows us to use simulation experiments for analyzing serious accidents caused by hazardous chemicals. Due to the toxicity and diffusion of hazardous chemicals, these accidents often lead not only to severe consequences and economic losses but also to traffic jams. Emergency evacuation after hazardous chemical accidents is an effective means to reduce the loss of life and property and to resume the transport network smoothly and as soon as possible. This paper considers the dynamic changes in the hazardous chemicals' concentration after their leakage and simulates the diffusion process. Based on the characteristics of emergency evacuation after hazardous chemical accidents, we build a mixed-integer programming model and design a heuristic algorithm using network optimization and diffusion simulation. We then verify the validity and feasibility of the algorithm using Jinan, China, as a computational example. Finally, we compare the results from different scenarios to explore the key factors affecting the effectiveness of the evacuation process.
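As a toy illustration of the network-optimization ingredient, the sketch below routes evacuees at minimum total travel cost on a small static road network; it is not the paper's mixed-integer model or its diffusion-coupled heuristic, and all node names, capacities, and costs are made up.

```python
import networkx as nx

# Toy evacuation network: move 300 evacuees from an accident zone to two
# shelters at minimum total travel cost, subject to road capacities.
# Node names, capacities, and costs are purely illustrative.
G = nx.DiGraph()
G.add_node("accident_zone", demand=-300)   # 300 evacuees must leave here
G.add_node("shelter_A", demand=200)        # shelter_A can take 200
G.add_node("shelter_B", demand=100)        # shelter_B can take 100
G.add_edge("accident_zone", "junction_1", capacity=180, weight=4)
G.add_edge("accident_zone", "junction_2", capacity=150, weight=6)
G.add_edge("junction_1", "shelter_A", capacity=200, weight=3)
G.add_edge("junction_1", "junction_2", capacity=80,  weight=2)
G.add_edge("junction_2", "shelter_A", capacity=100, weight=5)
G.add_edge("junction_2", "shelter_B", capacity=150, weight=3)

flow = nx.min_cost_flow(G)                 # dict: node -> node -> evacuees routed
print("total travel cost:", nx.cost_of_flow(G, flow))
print(flow)
```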
We developed an integrated method that better constrains subsalt tomography using geology, thermal history modeling, and rock-physics principles. This method, called rock-physics-guided velocity modeling for migration, uses predicted pore pressure as a guide to improve the quality of the earth model. We first generated a rock-physics model that provided a range of plausible pore pressures lying between hydrostatic and fracture pressure. This range of plausible pore pressures was then converted into a range of plausible depth-varying velocities as a function of pore pressure, consistent with geology and rock physics. Such a range of plausible velocities is called the rock-physics template. The template was then used to flatten the seismic gathers; we call this the pore-pressure scan technique. The outcome of the pore-pressure scan process was an “upper” and a “lower” bound of pore pressure for a given earth model. These velocity bounds were then used as constraints on the subsequent tomography, and further iterations were carried out. The integrated method not only flattened the common image point gathers but also limited the velocity field to its physically and geologically plausible range without well control; for example, in the study area it produced a better image and pore-pressure prognosis below salt. We determined that geologic control is essential, and we used it for stratigraphy, structure, unconformities, and so on. The method has had several subsalt applications in the Gulf of Mexico and has shown that subsalt pore pressure can be reliably predicted.
This short paper has two parts. First, we prove a generalisation of Aumann's surprising impossibility result in the context of rational decision making. In the second part, we discuss the interpretational meaning of some formal setups of epistemic models, by means of presenting an interesting puzzle in epistemic logic. The aim is to highlight certain problematic aspects of these epistemic systems concerning the first-/third-person asymmetry that underlies both parts of the story. This asymmetry, we argue, reveals certain limits of what epistemic models can be.
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory -- in particular, Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some of its technical properties in relation to other axioms of the system.
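For reference, the distinction at issue is the following (standard definitions, not notation specific to the paper):

```latex
% Finite vs. countable additivity of a probability measure P.
\[
  P(A \cup B) = P(A) + P(B)
  \quad \text{whenever } A \cap B = \emptyset
  \qquad \text{(finite additivity)}
\]
\[
  P\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr) = \sum_{n=1}^{\infty} P(A_n)
  \quad \text{whenever the } A_n \text{ are pairwise disjoint}
  \qquad \text{(countable additivity)}
\]
```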
This paper proposes a novel adaptive dynamic programming (ADP) approach to address the optimal consensus control problem for discrete-time multiagent systems (MASs). Compared with traditional optimal control algorithms for MASs, the proposed algorithm is designed on the basis of an event-triggered scheme, which saves communication and computation resources. First, the consensus tracking problem is transformed into an input-to-state stability problem. Based on this, the event-triggered condition for each agent is designed and the event-triggered ADP is presented. Second, neural networks are introduced to simplify the application of the proposed algorithm. Third, the stability analysis of the MASs under the event-triggered conditions is provided, and the estimation errors of the neural networks' weights are also proved to be uniformly ultimately bounded. Finally, the simulation results demonstrate the effectiveness of the event-triggered ADP consensus control method.
The notion of comparative probability defined in Bayesian subjectivist theory stems from an intuitive idea that, for a given pair of events, one event may be considered “more probable” than the other. Yet it is conceivable that there are cases where it is indeterminate which event is more probable, due to, e.g., lack of robust statistical information. We take these cases to involve indeterminate comparative probabilities. This paper provides a Savage-style decision-theoretic foundation for indeterminate comparative probabilities.
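A standard, fully determinate axiomatization of comparative probability, which the paper's indeterminate setting relaxes, reads roughly as follows (a de Finetti-style formulation, not the paper's own):

```latex
% Qualitative (comparative) probability on events, de Finetti-style.
% Read A \succeq B as "A is at least as probable as B". The axioms are:
%   1. \succeq is complete and transitive;
%   2. A \succeq \emptyset for every A, and S \succ \emptyset;
%   3. whenever A \cap C = B \cap C = \emptyset,
\[
  A \succeq B \quad\Longleftrightarrow\quad A \cup C \succeq B \cup C .
\]
% Dropping completeness in axiom 1 is one natural way to model
% indeterminate comparative probabilities.
```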
It is widely taken that the first-order part of Frege's Begriffsschrift is complete. However, there does not seem to have been a formal verification of this received claim. The general concern is that Frege's system is one axiom short in the first-order predicate calculus compared to what is, by now, the standard first-order theory. Yet Frege has one extra inference rule in his system. The question, then, is whether Frege's first-order calculus is still deductively sufficient as far as first-order completeness is concerned. In this short note we confirm that the missing axiom is derivable from his stated axioms and inference rules, and hence that the logical system of the Begriffsschrift is indeed first-order complete.
As an important factor influencing construction workers' safety performance, safety stressors have received increasing attention. However, no consensus has been reached on the relationship between different types of safety stressors and the subdimensions of safety performance, and the mechanism by which safety stressors influence safety performance remains unclear. This study proposed a multiple mediation model with ego depletion and self-efficacy as mediators between safety stressors and workers' safety performance. Data were collected from 335 construction workers in China. Results demonstrated that: the three types of safety stressors all had negative effects on workers' safety performance; self-efficacy mediated all the relationships between the three safety stressors and safety performance; ego depletion mediated only part of the relationships between the three safety stressors and safety performance; and only part of the multiple-step mediating effects through ego depletion and self-efficacy were supported. This study contributes by shedding light on the mechanism by which safety stressors influence workers' safety performance and by providing more empirical evidence for the relationship between various safety stressors and the subdimensions of safety performance. Additionally, targeted strategies for improving workers' safety performance were proposed according to the findings.
In the present study, we tested the effectiveness of color coding on the programming learning of students who were learning from video lectures. Effectiveness was measured using multimodal physiological measures, combining eye tracking and electroencephalography (EEG). Using a between-subjects design, 42 university students were randomly assigned to two video lecture conditions. The participants' eye tracking and EEG signals were recorded while watching the assigned video, and their learning performance was subsequently assessed. The results showed that the color-coded design was more beneficial than the grayscale design, as indicated by smaller pupil diameter, shorter fixation duration, higher EEG theta and alpha band power, lower EEG cognitive load, and better learning performance. The present findings have practical implications for designing slide-based programming learning video lectures; slides should highlight the format of the program code using color coding.
Objective: This study aimed to explore the relationship among cognitive fusion, experiential avoidance, and obsessive–compulsive symptoms in patients with obsessive–compulsive disorder (OCD). Methods: A total of 118 outpatients and inpatients with OCD and 109 gender- and age-matched healthy participants were assessed with the Cognitive Fusion Questionnaire (CFQ), the Acceptance and Action Questionnaire-II (AAQ-II), the Yale–Brown Obsessive Compulsive Scale, the Hamilton Anxiety Scale, and the Hamilton Depression Scale for questionnaire testing and data analysis. Results: The levels of cognitive fusion and experiential avoidance in the OCD group were significantly higher than those in the healthy control group. Regression analysis showed that, in predicting the total score of obsessive–compulsive symptoms, AAQ-II and CFQ entered the equation, which explained 17.1% of the variance. In predicting anxiety, only AAQ-II entered the equation, which explained 13% of the variance. In predicting depression, AAQ-II entered the equation, which explained 17.7% of the variance. Conclusion: Cognitive fusion and experiential avoidance may be important factors in the maintenance of OCD, and experiential avoidance can positively predict the anxiety and depression of OCD patients.
Scientists normally earn less money than members of many other professions that require a similar amount of training and qualification. The economic theory of marginal utility and cost-benefit analysis can be applied to explain this phenomenon. Although scientists make less money than entertainment stars, scientists do research out of interest, and they also enjoy a much higher reputation and social status in some countries.
Interpersonal physiological synchrony has been consistently found during collaborative tasks. However, few studies have applied synchrony to predict collaborative learning quality in real classrooms. To explore the relationship between interpersonal physiological synchrony and collaborative learning activities, this study collected electrodermal activity (EDA) and heart rate during naturalistic class sessions and compared the physiological synchrony between an independent task and a group discussion task. The students were recruited from a renowned university in China. Since each student learns differently and not everyone prefers collaborative learning, participants were sorted into collaboration and independent dyads based on their collaborative behaviors before data analysis. The results showed that, during group discussions, high-collaboration dyads produced significantly higher synchrony than low-collaboration dyads. Given the equivalent engagement level during independent and collaborative tasks, the difference in physiological synchrony between high- and low-collaboration dyads was triggered by collaboration quality. Building on this result, a classification analysis was conducted, indicating that EDA synchrony can identify different levels of collaboration quality.
The purpose of the present study was to explore the direct influence of self-concept and self-imagination on English language learning outcomes (ELLO). Furthermore, this study examined the mediating role of self-efficacy in the relationship between self-concept, self-imagination, and ELLO. A survey questionnaire of 21 items was used in this study. We distributed the questionnaire through a QR code and collected data from 2,517 participants enrolled in blended learning courses at the undergraduate level in Chinese universities. The relationships among the variables were measured using SmartPLS-SEM 3.3.3. The outcomes of the present study indicated a direct, positive, and significant connection of self-concept, self-imagination, and self-efficacy with ELLO. Regarding indirect influences, self-concept and self-imagination had positive and significant effects on ELLO through self-efficacy. Thus, self-efficacy was indicated to play a mediating role between self-concept, self-imagination, and ELLO. We can conclude that self-concept, self-imagination, and self-efficacy are the main predictors of ELLO in blended learning courses during the pandemic. Additionally, self-concept and self-imagination, along with the intervening role of self-efficacy, play a more effective role in improving ELLO. Moreover, this study provides useful practical implications and future research directions.
At present, with the continuous rise in public consumption levels, the pressure on college students' entrepreneurship or employment is increasingly severe. Under the concept of positive psychological intervention, the present work aims to alleviate the entrepreneurial pressure on college students and improve college students' entrepreneurial education through an analysis of enterprise management elements. A 3-month intervention experiment, including a pre-test, a preventive curriculum intervention, a post-test, and a delayed test, is conducted on a control group and an experimental group to investigate the entrepreneurial intention, emotional management ability, and ability to deal with entrepreneurial pressure of college students. In addition, based on a complex adaptive system (CAS), the enterprise management elements are analyzed and a three-layer network model is constructed. Meanwhile, new diversified elements of enterprise management are defined to discuss the effectiveness and psychological impact of diversified management, showing that psychological security plays an intermediary role in the cross-layer relationship chain of the three-layer CAS network. The experimental results indicate that, on the whole, the positive psychological intervention reduces the pressure of students in the experimental group, significantly ameliorates depression and anxiety, and promotes a positive personality in all respects. Moreover, in the delayed test after 3 months, the experimental group maintains a relatively better state than the control group. By exploring the effectiveness and characteristics of diversified management, this experiment confirms that the improvement of psychological security under positive psychological intervention has a positive impact on the effectiveness of diversified management. The present work discusses the hierarchical construction in enterprise management and puts forward reasonable suggestions and theoretical developments for the entrepreneurial practice of college students.
In late 2019, COVID-19 began to spread over the world, causing millions of deaths. In the first few months of the pandemic, several countries successfully prevented the spread of the disease. By contrast, the pandemic in many other countries was not controlled well. For example, India encountered a second serious outbreak of COVID-19 from April 2021 due to the poor resistance measures implemented by the government. To identify effective countermeasures, this research proposes a model of the COVID-19 pandemic and its response system, which consists of an infection subsystem, a quarantine subsystem, and a medical subsystem. On this basis, an improved SEIR-SD model is established and used to analyze the response measures quantitatively. This model successfully simulates the actual epidemic scenarios in Wuhan, which verifies its effectiveness. Afterward, the impacts of the hospital admission rate, quarantine rate, average contact number, and contact infection rate on the cumulative numbers of infections and deaths are analyzed by simulation. The results show that both medical and administrative efforts, especially in the early stage of the epidemic, are significant in reducing the number of infections and shortening the epidemic period. On the medical side, more stringent quarantine brings an earlier inflection point of the epidemic; more importantly, improving the treatment rate significantly reduces the scale of the epidemic. On the administrative side, enforcing individual protection and strict community closure can effectively cut off the transmission of the virus and curb the spread of the epidemic. Finally, this research proposes several practical suggestions in response to the COVID-19 pandemic. The main contribution of this research is that the effects of different response measures on the daily number of new infections and the cumulative number of deaths of a country or region during the COVID-19 pandemic are estimated quantitatively based on modeling and simulation.
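As a rough illustration of the compartmental backbone of such models, the sketch below integrates a basic SEIR system with a simple quarantine switch; it is not the paper's SEIR-SD system-dynamics model, and all parameter values are illustrative.

```python
import numpy as np

# Basic SEIR model with a crude quarantine switch (illustrative only;
# this is not the paper's SEIR-SD model and the parameters are made up).
N = 1_000_000                               # population size (assumed)
beta, sigma, gamma = 0.6, 1 / 5.2, 1 / 10   # contact, incubation, recovery rates
quarantine = 0.5                            # contact reduction once controls start
t_control = 30                              # day on which controls begin (assumed)

S, E, I, R = N - 10, 0.0, 10.0, 0.0
dt, days = 0.1, 180
peak = 0.0
for step in range(int(days / dt)):
    t = step * dt
    b = beta * (1 - quarantine) if t >= t_control else beta
    new_exposed  = b * S * I / N     # S -> E
    new_infected = sigma * E         # E -> I
    new_removed  = gamma * I         # I -> R
    S -= dt * new_exposed
    E += dt * (new_exposed - new_infected)
    I += dt * (new_infected - new_removed)
    R += dt * new_removed
    peak = max(peak, I)

print(f"peak number of infectious individuals: {peak:,.0f}")
```

Earlier or stricter controls (a smaller t_control or a larger quarantine value) lower the peak and the cumulative toll, which matches the qualitative pattern the simulation results above describe.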
Globalization and informatization are reshaping human life and social behaviors. The purpose of this work is to explore worldwide strategies for cultivating international talent with a global vision. As the global language with the largest number of speakers, English, and especially the effectiveness of English learning, has long been a major concern of scholars and educators. This work innovatively studies the combination of immersion-based English teaching with virtual reality (VR) technology. Then, based on an experimental design, 106 students from a Chinese school were selected for a 16-week quasi-experimental study. The collected data were analyzed with statistical software, and the hypotheses were verified. The results showed a significantly positive correlation between VR and immersion-based language teaching, a significantly positive correlation between immersion-based language teaching and academic achievement, and a positive correlation between VR and learning outcomes. Compared with other state-of-the-art research methods, this work modifies the students' oral test through analysis and comparison with the system database, and the students' learning effect is greatly improved. Finally, some suggestions are put forward according to the research results to provide an experimental reference for English teachers and future linguistics teaching.
An increasing number of investors in renowned companies are turning their attention to stock prediction in the search for new, efficient ways of hypothesizing about markets through the application of behavioral finance. Accordingly, research on stock prediction is becoming a popular direction in academia and industry. In this study, the goal is to establish a model for predicting stock price movement through a knowledge graph built from the financial news of renowned companies. In contrast to traditional methods of stock prediction, our approach considers the effects of event tuple characteristics on stocks on the basis of knowledge graph and deep learning. The proposed model and other feature selection models were used to perform feature extraction on the websites of Thomson Reuters and Cable News Network. Numerous experiments were conducted to derive evidence of the effectiveness of knowledge graph embedding for classification tasks in stock prediction. A comparison of the average accuracy with which the same feature combinations were extracted over six stocks indicated that the proposed method achieves better performance than an approach that uses only stock data, a bag-of-words method, and a convolutional neural network. Our work highlights the usefulness of knowledge graphs in implementing business activities and helping practitioners and managers make business decisions.
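As a rough illustration of how knowledge-graph triples can be embedded and scored, the sketch below uses a generic TransE-style score over (subject, relation, object) event tuples; this is not the paper's architecture, and all entity names, relation names, and dimensions are made up.

```python
import numpy as np

# TransE-style scoring of (subject, relation, object) event tuples, as a
# generic example of knowledge-graph embedding; a downstream stock-movement
# classifier would consume such embeddings. Names and sizes are illustrative.
rng = np.random.default_rng(0)
entities  = {"Apple": 0, "iPhone": 1, "lawsuit": 2}
relations = {"launches": 0, "faces": 1}
dim = 16
E = rng.normal(size=(len(entities), dim))    # entity embeddings (untrained here)
R = rng.normal(size=(len(relations), dim))   # relation embeddings (untrained here)

def score(subj, rel, obj):
    """TransE plausibility: larger (less negative) means a more plausible triple."""
    s, r, o = E[entities[subj]], R[relations[rel]], E[entities[obj]]
    return -np.linalg.norm(s + r - o)

# Event tuples extracted from news headlines would be scored or embedded like
# this and fed, together with price features, to a classifier of stock movement.
print(score("Apple", "launches", "iPhone"))
```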
Epilepsy is a neurological disease, and locating a lesion before neurosurgery or invasive intracranial electroencephalography surgery using intracranial electrodes is often very challenging. High-frequency oscillations (HFOs) in MEG signals can now be used to detect lesions. Because manual HFO detection is time-consuming and error-prone, an automatic HFO detector with high accuracy is very necessary in modern medicine. Therefore, an optimized capsule neural network was used, and an MEG HFO detector based on MEGNet was proposed to facilitate the clinical detection of HFOs. To the best of our knowledge, this is the first time a neural network has been used to detect HFOs in MEG. After optimized configuration, the accuracy, precision, recall, and F1-score of the proposed detector reached 94%, 95%, 94%, and 94%, respectively, which were better than those of other classical machine learning models. In addition, we used a k-fold cross-validation scheme to test the performance consistency of the model. The distribution of the various performance indicators shows that our model is robust.
This research empirically analyzes the psychological impact of smog pollution on investors. The results indicate that smog pollution has a negative impact on investor sentiment, which is weakened by a positive tone in media reporting. Empirical evidence for the impact of smog pollution on investor sentiment and the related moderating role of media tone is presented in this study.
Inhibitory control training (ICT) is a promising method for improving individual inhibitory control performance. Recent studies have suggested transcutaneous vagus nerve stimulation (tVNS) as a novel approach to affecting cognitive function, owing to its ability to modulate the locus coeruleus-noradrenaline system. To examine the synergistic effects of combining ICT with tVNS, 58 young male college students were randomly assigned to four groups: ICT + tVNS, ICT + sham tVNS, sham ICT + tVNS, and sham ICT + sham tVNS. Participants were instructed to complete three sessions comprising pre-training tests, a training session, and post-training tests, in sequence. Results showed that the ICT + tVNS group significantly improved training and near-transfer effects on the stop-signal and Go/No-go tasks, and these effects were larger than those of the other groups. However, none of the groups exhibited a far-transfer effect on the color-word Stroop task. These results suggest that tVNS augments the intervention effects of training and similar inhibition tasks to achieve a synergistic effect; however, it does not modulate the effects of non-trained tasks or yield a far-transfer effect. ICT combined with tVNS may be a valuable intervention for improving inhibitory control in healthy individuals in certain industries and offers novel research ideas for using tVNS for cognitive improvement.
Researchers have suggested that infants exhibiting the baby schema are considered cute. Previous studies have mainly focused on changes in overall baby schema facial features. However, whether a change in eye size alone affects the perception of cuteness across different facial expressions and ages has not been explicitly evaluated until now. In the present study, a paired comparison method and a 7-point scale were used to investigate the effects of eye size on perceived cuteness across facial expressions and ages. The results show that stimuli with large eyes were perceived to be cuter than those with unmanipulated eyes and small eyes across all facial expressions and age groups. This suggests not only that the effect of the baby schema on cuteness is based on changes in a set of features but also that eye size as an individual feature can affect the perception of cuteness.