abstract

Full identifier: http://purl.org/dc/terms/abstract
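
In nanopublications, this term typically appears inside the assertion graph, which the head graph links to via np:hasAssertion (the predicate shown in the references below). The following is a minimal sketch in TriG, using placeholder ex: URIs and an elided abstract string, not an actual published nanopublication:

```trig
@prefix np:      <http://www.nanopub.org/nschema#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/> .

# Head graph: declares the nanopublication and points to its assertion part.
ex:head {
  ex:pub a np:Nanopublication ;
    np:hasAssertion ex:assertion .
}

# Assertion graph: the actual claim, here an abstract attached to a paper.
ex:assertion {
  ex:paper dcterms:abstract "..." .
}
```

A complete nanopublication would also declare np:hasProvenance and np:hasPublicationInfo graphs; they are omitted here for brevity.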

Related Templates

(none)

References

Each entry below gives the nanopublication part (with its predicate and object), the asserted abstract, the user it was published by, and the date it was published on.
Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: A central question concerning natural competence is why orthologs of competence genes are conserved in non-competent bacterial species, suggesting they have a role other than in transformation. Here we show that competence induction in the human pathogen Staphylococcus aureus occurs in response to ROS and host defenses that compromise bacterial respiration during infection. Bacteria cope with reduced respiration by obtaining energy through fermentation instead. Since fermentation is energetically less efficient than respiration, the energy supply must be assured by increasing the glycolytic flux. The induction of natural competence increases the rate of glycolysis in bacteria that are unable to respire via upregulation of DNA- and glucose-uptake systems. A competence-defective mutant showed no such increase in glycolysis, which negatively affects its survival in both mouse and Galleria infection models. Natural competence fosters genetic variability and provides S. aureus with additional nutritional and metabolic possibilities, allowing it to proliferate during infection.
Published By: Johanna Sophie Mück
Published On: 2025-06-25T12:53:52.514Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: Symbolic approaches to artificial intelligence represent things within a domain of knowledge through physical symbols, combine symbols into symbol expressions, and manipulate symbols and symbol expressions through inference processes. While a large part of Data Science relies on statistics and applies statistical approaches to artificial intelligence, there is an increasing potential for successfully applying symbolic approaches as well. Symbolic representations and symbolic inference are close to human cognitive representations and therefore comprehensible and interpretable; they are widely used to represent data and metadata, and their specific semantic content must be taken into account for analysis of such information; and human communication largely relies on symbols, making symbolic representations a crucial part in the analysis of natural language. Here we discuss the role symbolic representations and inference can play in Data Science, highlight the research challenges from the perspective of the data scientist, and argue that symbolic methods should become a crucial component of the data scientists’ toolbox.
Published By: Tobias Kuhn
Published On: 2025-05-26T10:11:16.457Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: Symbolic approaches to artificial intelligence represent things within a domain of knowledge through physical symbols, combine symbols into symbol expressions and structures, and manipulate symbols and symbol expressions and structures through inference processes. While a large part of Data Science relies on statistics and applies statistical approaches to artificial intelligence, there is an increasing potential for successfully applying symbolic approaches as well. Symbolic representations and symbolic inference are close to human cognitive representations and therefore comprehensible and interpretable; they are widely used to represent data and metadata, and their specific semantic content must be taken into account for analysis of such information; and human communication largely relies on symbols, making symbolic representations a crucial part in the analysis of natural language. Here we discuss the role symbolic representations and inference can play in Data Science, highlight the research challenges from the perspective of the data scientist, and argue that symbolic methods should become a crucial component of the data scientists’ toolbox.
Published By: Tobias Kuhn
Published On: 2025-05-26T10:04:45.172Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: has the abstract
Published By: Tobias Kuhn
Published On: 2025-05-26T10:00:49.617Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Published By: Tobias Kuhn
Published On: 2025-05-26T10:00:49.617Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: Context: Systematic Reviews (SRs) are means for collecting and synthesizing evidence from the identification and analysis of relevant studies from multiple sources. To this aim, they use a well-defined methodology meant to mitigate the risks of biases and ensure repeatability for later updates. SRs, however, involve significant effort. Goal: The goal of this paper is to introduce a novel methodology that reduces the amount of manual tedious tasks involved in SRs while taking advantage of the value provided by human expertise. Method: Starting from current methodologies for SRs, we replaced the steps of keywording and data extraction with an automatic methodology for generating a domain ontology and classifying the primary studies. This methodology has been applied in the Software Engineering sub-area of Software Architecture and evaluated by human annotators. Results: The result is a novel Expert-Driven Automatic Methodology, EDAM, for assisting researchers in performing SRs. EDAM combines ontology-learning techniques and semantic technologies with the human-in-the-loop. The first (thanks to automation) fosters scalability, objectivity, reproducibility and granularity of the studies; the second allows tailoring to the specific focus of the study at hand and knowledge reuse from domain experts. We evaluated EDAM on the field of Software Architecture against six senior researchers. As a result, we found that the performance of the senior researchers in classifying papers was not statistically significantly different from EDAM. Conclusions: Thanks to automation of the less-creative steps in SRs, our methodology allows researchers to skip the tedious tasks of keywording and manually classifying primary studies, thus freeing effort for the analysis and the discussion.
Published By: Tobias Kuhn
Published On: 2025-05-26T09:28:03.727Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: Translational research applies findings from basic science to enhance human health and well-being. In translational research projects, academia and industry work together to improve healthcare, often through public-private partnerships. This “translation” is often not easy, because it means that the so-called “valley of death” will need to be crossed: many interesting findings from fundamental research do not result in new treatments, diagnostics and prevention. To cross the valley of death, fundamental researchers need to collaborate with clinical researchers and with industry so that promising results can be implemented in a product. The success of translational research projects often does not depend only on the fundamental science and the applied science, but also on the informatics needed to connect everything: the translational research informatics. This informatics, which includes data management, data stewardship and data governance, enables researchers to store and analyze their ‘big data’ in a meaningful way, and enable application in the clinic. The author has worked on the information technology infrastructure for several translational research projects in oncology for the past nine years, and presents his lessons learned in this paper in the form of ten commandments. These commandments are not only useful for the data managers, but for all involved in a translational research project. Some of the commandments deal with topics that are currently in the spotlight, such as machine readability, the FAIR Guiding Principles and the GDPR regulations. Others are mentioned less in the literature, but are just as crucial for the success of a translational research project.
Published By: Tobias Kuhn
Published On: 2025-05-23T13:32:11.281Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: This is an extended, revised version of Philipson (2017). Findability and interoperability of some PIDs, Persistent Identifiers, and their compliance with the FAIR data principles are explored, where ARKs, Archival Resource Keys, were added in this version. It is suggested that the wide distribution and findability (e.g. by simple ‘googling’) on the internet may be as important for the usefulness of PIDs as the resolvability of PID URIs – Uniform Resource Identifiers. This version also includes new reasoning about why sometimes PIDs such as DOIs, Digital Object Identifiers, are not used in citations. The prevalence of phenomena such as link rot implies that URIs cannot always be trusted to be persistently resolvable. By contrast, the well distributed, but seldom directly resolvable ISBN, International Standard Book Number, has proved remarkably resilient, with far-reaching persistence, inherent structural meaning and good validatability, through fixed string-length, pattern-recognition, restricted character set and check digit. Examples of regular expressions used for validation of PIDs are supplied or referenced. The suggestion to add context and meaning to PIDs, making them “identify themselves”, through namespace prefixes and object types is more elaborate in this version. Meaning can also be inherent through structural elements, such as well defined, restricted string patterns, that at the same time make PIDs more “validatable”. Concluding this version is a generic, refined model for a PID with these properties, in which namespaces are instrumental as custodians, meaning-givers and validation schema providers. A draft example of a Schematron schema for validation of “new” PIDs in accordance with the proposed model is provided.
Published By: Tobias Kuhn
Published On: 2025-05-23T13:28:50.431Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: The increasing interest in analysing, describing, and improving the research process requires the development of new forms of scholarly data publication and analysis that integrate lessons and approaches from the fields of Semantic Technologies, Science of Science, Digital Libraries, and Artificial Intelligence. This editorial summarises the content of the Special Issue on Scholarly Data Analysis (Semantics, Analytics, Visualisation), which aims to showcase some of the most interesting research efforts in the field. This issue includes an extended version of the best papers of the last two editions of the “Semantics, Analytics, Visualisation: Enhancing Scholarly Dissemination” (SAVE-SD 2017 and 2018) workshop at The Web Conference.
Published By: Tobias Kuhn
Published On: 2025-05-23T13:24:40.167Z

Part: assertion (links a nanopublication to its assertion, http://www.nanopub.org/nschema#hasAssertion)
Predicate: abstract
Object: has the abstract
Published By: Tobias Kuhn
Published On: 2025-05-23T13:16:52.346Z