1 - 34 of 34
  • 1.
    Al Sabbagh, Bilal
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Cybersecurity Incident Response: A Socio-Technical Approach (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis examines the cybersecurity incident response problem using a socio-technical approach. The motivation of this work is the need to bridge the knowledge and practice gap that exists because of the increasing complexity of cybersecurity threats and our limited capability of applying the cybersecurity controls necessary to respond adequately to these threats. Throughout this thesis, knowledge from Systems Theory, Soft Systems Methodology and Socio-Technical Systems is applied to examine and document the socio-technical properties of the cybersecurity incident response process. The holistic modelling of the cybersecurity incident response process produced concepts and methods that were tested to improve the socio-technical security controls and minimise the existing gap in security controls.

    The scientific enquiry of this thesis is based on pragmatism as the underpinning research philosophy. The thesis uses a design science research approach and embeds multiple research methods to develop five artefacts (concept, model, method, framework and instantiation) outlined in nine peer-reviewed publications. The instantiated artefact embraces the knowledge developed during this research to provide a prototype for a socio-technical security information and event management system (ST-SIEM) integrated with an open-source SIEM tool. The relevance of the artefact was validated through a panel of cybersecurity experts using a Delphi method, which indicated that the artefact can improve the efficacy of handling cybersecurity incidents.

  • 2. Benjaminsson, Simon
    et al.
    Lansner, Anders
    Stockholm University, Faculty of Science, Numerical Analysis and Computer Science (NADA). Royal Institute of Technology, Sweden.
    Nexa: A scalable neural simulator with integrated analysis (2012). In: Network: Computation in Neural Systems, ISSN 0954-898X, Vol. 23, no 4, p. 254-271. Article in journal (Refereed)
    Abstract [en]

    Large-scale neural simulations encompass challenges in simulator design, data handling and understanding of simulation output. As the computational power of supercomputers and the size of network models increase, these challenges become even more pronounced. Here we introduce the experimental scalable neural simulator Nexa, for parallel simulation of large-scale neural network models at a high level of biological abstraction and for exploration of the simulation methods involved. It includes firing-rate models and capabilities to build networks using machine-learning-inspired methods, e.g. for self-organization of network architecture and for structural plasticity. We show scalability up to the size of the largest machines currently available for a number of model scenarios. We further demonstrate simulator integration with online analysis and real-time visualization as scalable solutions for the data handling challenges.
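    The firing-rate models mentioned in the abstract can be illustrated with a minimal single-process sketch. Note that Nexa itself is a parallel simulator; the two-unit network, the weights and the logistic squashing function below are illustrative assumptions, not Nexa code:

    ```python
    import math

    def step(rates, weights):
        """One discrete-time firing-rate update: each unit's new rate is a
        logistic squashing of the weighted sum of all units' current rates."""
        n = len(rates)
        return [1.0 / (1.0 + math.exp(-sum(weights[i][j] * rates[j] for j in range(n))))
                for i in range(n)]

    # hypothetical two-unit network with mutual excitation
    weights = [[0.0, 2.0],
               [2.0, 0.0]]
    rates = [0.1, 0.9]

    for _ in range(20):
        rates = step(rates, weights)

    print(rates)  # both units converge to the same steady rate
    ```

    Iterating the update is a fixed-point computation; with symmetric mutual excitation, the two units settle at a common rate regardless of their different starting values.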

  • 3. Biteus, Jonas
    et al.
    Lindgren, Tony
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Planning Flexible Maintenance for Heavy Trucks using Machine Learning Models, Constraint Programming, and Route Optimization (2017). In: SAE International Journal of Materials & Manufacturing, ISSN 1946-3979, E-ISSN 1946-3987, Vol. 10, no 3, p. 306-315. Article in journal (Refereed)
    Abstract [en]

    Maintenance planning of trucks at Scania has previously been done using static cyclic plans with fixed sets of maintenance tasks, determined by mileage, calendar time, and some data-driven physical models. Flexible maintenance has improved the maintenance program with the addition of general data-driven expert rules and the ability to move sub-sets of maintenance tasks between maintenance occasions. Meanwhile, successful modelling with machine learning on big data, automatic planning using constraint programming, and route optimization hint at the possibility of achieving even higher fleet utilization through further improvements of the flexible maintenance. The maintenance program has therefore been partitioned into its smallest parts and formulated as individual constraint rules. The overall goal is to maximize the utilization of a fleet, i.e. maximize the ability to perform transport assignments, with respect to maintenance. A sub-goal is to minimize the costs of vehicle breakdowns and of maintenance actions. The maintenance planner takes as input customer preferences and maintenance task deadlines, where the existing expert rule for the component has been replaced by a predictive model. Using machine learning, operational data have been used to train a predictive random forest model that can estimate the probability that a vehicle will have a breakdown given its operational data as input. The route optimization takes predicted vehicle health into consideration when optimizing routes and assignment allocations. The random forest model satisfactorily predicts failures, the maintenance planner successfully computes consistent and good maintenance plans, and the route optimizer gives optimal routes within tens of seconds of operation time. The model, the maintenance planner, and the route optimizer have been integrated into a demonstrator able to highlight the usability and feasibility of the suggested approach.
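    The breakdown-probability idea described in the abstract can be sketched at miniature scale. This is a toy, not Scania's model: the single mileage feature, the fleet data and the stump learner are invented for illustration; a real random forest uses many operational features and full decision trees.

    ```python
    import random

    def train_stump(sample):
        """Pick the mileage threshold that best separates breakdowns,
        predicting 'failure' for mileage >= threshold."""
        best_threshold, best_correct = None, -1
        for threshold in sorted({mileage for mileage, _ in sample}):
            correct = sum((mileage >= threshold) == failed for mileage, failed in sample)
            if correct > best_correct:
                best_threshold, best_correct = threshold, correct
        return best_threshold

    def breakdown_probability(stumps, mileage):
        """Fraction of stumps voting 'failure' -- the forest's probability estimate."""
        return sum(mileage >= t for t in stumps) / len(stumps)

    # hypothetical operational data: (mileage in 1000 km, breakdown observed?)
    fleet = [(50, False), (80, False), (120, True), (160, True), (200, True)]

    random.seed(0)
    # each stump is trained on a bootstrap sample, as in a random forest
    stumps = [train_stump([random.choice(fleet) for _ in fleet]) for _ in range(25)]

    print(breakdown_probability(stumps, 180))
    ```

    The ensemble's vote fraction plays the role of the estimated breakdown probability that the planner and route optimizer consume downstream.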

  • 4.
    Giannoulis, Constantinos
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Kabilan, Vandana
    A Method for VVA Tailoring: The REVVA Generic Process Tailoring Case Study (2007). Conference paper (Refereed)
  • 5.
    Giannoulis, Constantinos
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Svee, Eric-Oluf
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Zdravkovic, Jelena
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Capturing Consumer Preference in System Requirements Through Business Strategy (2013). In: International Journal of Information System Modeling and Design, ISSN 1947-8186, E-ISSN 1947-8194, Vol. 4, no 4, p. 1-26. Article in journal (Refereed)
    Abstract [en]

    A core concern within Business-IT alignment is coordinating strategic initiatives and plans with Information Systems (IS). Substantial work has been done on linking strategy to requirements for IS development, but it has usually been focused on the core value exchanges offered by the business, thus overlooking other aspects that influence the implementation of strategy. One of these, consumer preferences, has been proven to influence the successful provisioning of the business's customer value proposition, and this study aims to establish a conceptual link from both strategy and consumer preferences to system requirements. The core contention is that reflecting consumer preferences through business strategy in system requirements allows for the development of aligned systems, and therefore systems that better support a consumer orientation. The contribution of this paper is an approach to establish such alignment, accomplished through the proposal of a consumer preference meta-model mapped to a business strategy meta-model, which is further linked to a system requirements technique. The validity of this proposal is demonstrated through a case study carried out within an institution of higher education in Sweden.

  • 6.
    Giannoulis, Constantinos
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Zdravkovic, Jelena
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    A Design Science Perspective on Business Strategy Modeling (2014). In: Enterprise, Business-Process and Information Systems Modeling: 15th International Conference, BPMDS 2014, 19th International Conference, EMMSAD 2014, Held at CAiSE 2014, Thessaloniki, Greece, June 16-17, 2014. Proceedings / [ed] Ilia Bider, Khaled Gaaloul, John Krogstie, Selmin Nurcan, Henderik A. Proper, Rainer Schmidt, Pnina Soffer, Springer Berlin/Heidelberg, 2014, p. 424-438. Conference paper (Refereed)
    Abstract [en]

    An important topic in modeling for IS development concerns the quality of the obtained models, especially when these models are to be used in global scopes or as references. So far, a number of model quality frameworks have been established to assess relevant criteria such as completeness, clarity, modularity, or generality. In this study we look at how a research process contributes to the characteristics of a model produced during that process: what should be observed; which research methods should be selected and how they should be applied; what kind of results should be expected; how they should be evaluated; and so on. We report on this concern by presenting how we applied Design Science Research to model business strategy.

  • 7.
    Gurung, Ram B.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Adapted Random Survival Forest for Histograms to Analyze NOx Sensor Failure in Heavy Trucks (2019). In: Machine Learning, Optimization, and Data Science: Proceedings / [ed] Giuseppe Nicosia, Prof. Panos Pardalos, Renato Umeton, Prof. Giovanni Giuffrida, Vincenzo Sciacca, Springer, 2019, p. 83-94. Conference paper (Refereed)
    Abstract [en]

    In heavy-duty truck operation, important components need to be examined regularly so that unexpected breakdowns can be prevented. Data-driven failure prediction models can be built using operational data from a large fleet of trucks. Machine learning methods such as Random Survival Forest (RSF) can be used to generate a survival model that can predict the survival probabilities of a particular component over time. Operational data from the trucks usually have many feature variables represented as histograms. Although the bins of a histogram can be treated as independent numeric variables, dependencies among the bins might exist that are useful yet neglected when the bins are treated individually. Therefore, in this article, we propose an extension to the standard RSF algorithm that can handle histogram variables, and use it to train survival models for a NOx sensor. The trained model is compared in terms of overall error rate with the standard RSF model, where the bins of a histogram are treated individually as numeric features. The experimental results show that the adapted approach outperforms the standard approach; the feature variables considered important are also ranked.
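    The contrast between bin-by-bin splits and splits that use several bins jointly can be illustrated with a toy impurity calculation. This is a deliberately simplified stand-in: the two-bin histograms, the labels and the |b0 − b1| statistic are invented, and the paper's actual algorithm searches multi-bin splits during tree induction rather than using one fixed derived feature.

    ```python
    def gini(labels):
        """Gini impurity of a list of 0/1 labels."""
        if not labels:
            return 0.0
        p = sum(labels) / len(labels)
        return 2.0 * p * (1.0 - p)

    def best_split_impurity(values, labels):
        """Lowest weighted Gini achievable by thresholding one numeric value."""
        best = gini(labels)
        for t in sorted(set(values)):
            left = [y for v, y in zip(values, labels) if v <= t]
            right = [y for v, y in zip(values, labels) if v > t]
            weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            best = min(best, weighted)
        return best

    # hypothetical two-bin histograms per truck; label 1 = sensor failure
    hists = [(0, 2), (2, 0), (1, 1), (1, 1)]
    labels = [1, 1, 0, 0]

    # standard approach: split on one bin at a time
    single_bin = min(best_split_impurity([h[i] for h in hists], labels) for i in (0, 1))
    # joint use of both bins (here via a fixed derived statistic)
    joint = best_split_impurity([abs(h[0] - h[1]) for h in hists], labels)

    print(single_bin, joint)
    ```

    On this contrived data the joint statistic separates the classes perfectly while no single-bin threshold can, which is exactly the kind of between-bin dependency the adapted approach aims to exploit.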

  • 8.
    Gurung, Ram B.
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Lindgren, Tony
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Boström, Henrik
    An Interactive Visual Tool to Enhance Understanding of Random Forest Prediction (2020). In: Archives of Data Science, Series A, E-ISSN 2363-9881. Article in journal (Refereed)
    Abstract [en]

    Random forests are known to provide accurate predictions, but the predictions are not easy to understand. In order to support the understanding of such predictions, an interactive visual tool has been developed. The tool can be used to manipulate selected features to explore what-if scenarios. It exploits the internal structure of the decision trees in a trained forest model and presents this information as interactive plots and charts. In addition, the tool presents a simple decision rule as an explanation for the prediction. It also recommends reassignments of the example's feature values that would change the prediction to a preferred class. An evaluation of the tool was undertaken in a large truck manufacturing company, targeting fault prediction for a selected component in trucks. A set of domain experts were invited to use the tool and provide feedback in post-task interviews. The results of this investigation suggest that the tool may indeed aid in understanding the predictions of random forests, and also allows for gaining new insights.
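    The what-if interaction the tool supports can be sketched as follows. The three hand-written "trees", the features and the thresholds are invented stand-ins for a trained forest; the point is only the interaction of perturbing one feature and watching the majority vote flip.

    ```python
    # hand-written stand-ins for trained decision trees (class 1 = fault)
    def tree1(x): return 1 if x["temp"] > 90 else 0
    def tree2(x): return 1 if x["temp"] > 70 and x["vibration"] > 3 else 0
    def tree3(x): return 1 if x["vibration"] > 5 else 0

    forest = [tree1, tree2, tree3]

    def predict(x):
        """Majority vote over the toy forest."""
        votes = sum(tree(x) for tree in forest)
        return 1 if 2 * votes > len(forest) else 0

    example = {"temp": 95, "vibration": 4}
    assert predict(example) == 1  # currently predicted faulty

    # what-if sweep: how far must temp drop before the prediction flips?
    flip_temp = None
    for temp in range(95, 0, -5):
        if predict({**example, "temp": temp}) == 0:
            flip_temp = temp
            break

    print("prediction flips to 'no fault' at temp =", flip_temp)
    ```

    Sweeping one feature while holding the rest fixed is also how the tool's recommended feature reassignments can be found: the first value at which the vote flips is the suggested change.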

  • 9.
    Gurung, Ram Bahadur
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Learning Decision Trees and Random Forests from Histogram Data: An application to component failure prediction for heavy duty trucks (2017). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Large volumes of data have become commonplace in many domains. Machine learning algorithms can be trained to look for useful hidden patterns in such data. For various reasons, these big data sometimes need to be summarized into a manageable size, for example by using histograms. Traditionally, machine learning algorithms can be trained on data expressed as real numbers and/or categories, but not on a complex structure such as a histogram. Since machine learning algorithms that can learn from data with histograms have not been explored to a major extent, this thesis intends to further explore this domain.

    This thesis has been limited to classification algorithms, in particular tree-based classifiers such as decision trees and random forests. Decision trees are one of the simplest and most intuitive algorithms to train. A single decision tree might not be the best algorithm in terms of predictive performance, but it can be largely enhanced by considering an ensemble of many diverse trees as a random forest, which is why both algorithms were considered. The objective of this thesis is thus to investigate how these algorithms can be adapted to learn better from histogram data. Our proposed approach considers the use of multiple bins of a histogram simultaneously to split a node during the tree induction process. Treating bins simultaneously is expected to capture dependencies among them, which could be useful. The proposed approaches were evaluated experimentally by comparing them with the standard approach of growing a tree, where a single bin is used to split a node. Accuracy and the area under the receiver operating characteristic (ROC) curve (AUC), along with the average time taken to train a model, were used for comparison. For experimental purposes, real-world data from a large fleet of heavy duty trucks were used to build a component-failure prediction model. These data contain information about the operation of trucks over the years, where most operational features are summarized as histograms. Further experiments were performed on a synthetically generated dataset. The results show that the proposed approach outperforms the standard approach in predictive performance and compactness of the model, but lags behind in training time. This thesis was motivated by a real-life problem encountered in the operation of heavy duty trucks in the automotive industry while building a data-driven failure-prediction model. All the details about collecting and cleansing the data, and the challenges encountered while making the data ready for training the algorithm, are therefore presented.

  • 10.
    Gurung, Ram Bahadur
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Random Forest for Histogram Data: An application in data-driven prognostic models for heavy-duty trucks (2020). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Data mining and machine learning algorithms are trained on large datasets to find useful hidden patterns. These patterns can help to gain new insights and make accurate predictions. Usually, the training data is structured in a tabular format, where the rows represent the training instances and the columns represent the features of these instances. The feature values are usually real numbers and/or categories. As very large volumes of digital data are becoming available in many domains, the data is often summarized into manageable sizes for efficient handling. Aggregating data into histograms is one means of reducing its size. However, traditional machine learning algorithms have a limited ability to learn from such data, and this thesis explores extensions of the algorithms to allow for more effective learning from histogram data.

    The thesis focuses on the decision tree and random forest algorithms, which are easy to understand and implement. Although a single decision tree may not give the highest predictive performance, one of its benefits is that it often allows for easy interpretation. By combining many such diverse trees into a random forest, the performance can be greatly enhanced, however at the cost of reduced interpretability. By first finding out how to effectively train a single decision tree from histogram data, these findings can be carried over to building robust random forests from such data. The overarching research question of the thesis is: how can the random forest algorithm be improved to learn more effectively from histogram data, and how can the resulting models be interpreted? An experimental approach was taken, under the positivist paradigm, in order to answer the question. The thesis investigates how the standard decision tree and random forest algorithms can be adapted to learn more accurate models from histogram data. Experimental evaluations of the proposed changes were carried out on both real-world data and synthetically generated experimental data. The real-world data were taken from the automotive domain, concerning the operation and maintenance of heavy-duty trucks. Component failure prediction models were built from the operational data of a large fleet of trucks, where the information about their operation over many years has been summarized as histograms. The experimental results showed that the proposed approaches were more effective than the original algorithms, which treat the bins of histograms as separate features. The thesis also contributes towards the interpretability of random forests by evaluating an interactive visual tool that assists users in understanding the reasons behind the output of the models.

  • 11.
    Homem, Irvin
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Advancing Automation in Digital Forensic Investigations (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Digital Forensics is used to aid traditional preventive security mechanisms when they fail to curtail sophisticated and stealthy cybercrime events. The Digital Forensic Investigation process is largely manual in nature, or at best quasi-automated, requiring a highly skilled labour force and involving a sizeable time investment. Industry-standard tools are evidence-centric, automate only a few precursory tasks (e.g., parsing and indexing) and have limited capabilities for integrating multiple evidence sources. Furthermore, these tools are always human-driven.

    These challenges are exacerbated in the increasingly computerized and highly networked environment of today. Volumes of digital evidence to be collected and analyzed have increased, and so has the diversity of digital evidence sources involved in a typical case. This further handicaps digital forensics practitioners, labs and law enforcement agencies, causing delays in investigations and legal systems due to backlogs of cases. Improved efficiency of the digital investigation process is needed, in terms of increasing the speed and reducing the human effort expended. This study aims at achieving this time and effort reduction, by advancing automation within the digital forensic investigation process.

    Using a Design Science research approach, artifacts are designed and developed to address these practical problems. Summarily, the requirements, and architecture of a system for automating digital investigations in highly networked environments are designed. The architecture initially focuses on automation of the identification and acquisition of digital evidence, while later versions focus on full automation and self-organization of devices for all phases of the digital investigation process. Part of the remote evidence acquisition capability of this system architecture is implemented as a proof of concept. The speed and reliability of capturing digital evidence from remote mobile devices over a client-server paradigm is evaluated. A method for the uniform representation and integration of multiple diverse evidence sources for enabling automated correlation, simple reasoning and querying is developed and tested. This method is aimed at automating the analysis phase of digital investigations. Machine Learning (ML)-based triage methods are developed and tested to evaluate the feasibility and performance of using such techniques to automate the identification of priority digital evidence fragments. Models from these ML methods are evaluated in identifying network protocols within DNS tunneled network traffic. A large dataset is also created for future research in ML-based triage for identifying suspicious processes for memory forensics.

    From an ex ante evaluation, the designed system architecture enables individual devices to participate in the entire digital investigation process, contributing their processing power towards alleviating the burden on the human analyst. Experiments show that remote evidence acquisition from mobile devices over networks is feasible; however, a single-TCP-connection paradigm scales poorly. A proof-of-concept experiment demonstrates the viability of automated integration, correlation and reasoning over multiple diverse evidence sources using semantic web technologies. Experimentation also shows that ML-based triage methods can enable prioritization of certain digital evidence sources, for acquisition or analysis, with up to 95% accuracy.

    The artifacts developed in this study provide concrete ways to enhance automation in the digital forensic investigation process to increase the investigation speed and reduce the amount of costly human intervention needed.
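    The ML-based triage idea, prioritizing which evidence to acquire or analyze first, can be sketched as below. The fragment names, the features and the linear scoring weights are all invented; the thesis trains real classifiers on network traffic and memory data rather than using a hand-set formula.

    ```python
    # hypothetical evidence fragments: (entropy, size in kB, known-bad indicator)
    fragments = {
        "dns_capture.pcap": (7.8, 120, 1),
        "holiday_photo.jpg": (7.9, 4200, 0),
        "system_log.txt": (4.1, 30, 0),
    }

    def relevance_score(entropy, size_kb, known_bad):
        """Stand-in linear scorer; a trained classifier would replace this."""
        return 0.05 * entropy + 0.0001 * size_kb + 0.6 * known_bad

    # triage: rank fragments so the highest-priority evidence is handled first
    ranked = sorted(fragments, key=lambda name: relevance_score(*fragments[name]),
                    reverse=True)
    print(ranked)
    ```

    In an automated pipeline such a ranking would decide the order of acquisition or analysis, so analyst time is spent on the fragments most likely to matter.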

     

  • 12.
    Höök, Kristina
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    A glass box approach to adaptive hypermedia (1996). Doctoral thesis, monograph (Other academic)
  • 13.
    Jalali, Amin
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Aspect-Oriented Business Process Management (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Separation of concerns has long been considered an effective and efficient strategy to deal with complexity in information systems. One sort of concern, like security and privacy, crosses over other concerns in a system. Such concerns are called cross-cutting concerns. As a result, the realization of these concerns is scattered through the whole system, which makes their management difficult.

    Aspect Orientation is a paradigm in information systems which aims to modularize cross-cutting concerns. This paradigm is well researched in the programming area, where many aspect-oriented programming languages have been developed, e.g., AspectJ. It has also been investigated in other areas, such as requirements engineering and service composition. In the Business Process Management (BPM) area, Aspect Oriented Business Process Modeling aims to specify how this modularization technique can support encapsulating cross-cutting concerns in process models. However, it is not clear how these models should be supported in the whole BPM lifecycle. In addition, the support for designing these models has been limited to imperative process models, which support rigid business processes. Neither has it been investigated how this modularization technique can be supported through declarative or hybrid models to support the separation of cross-cutting concerns for flexible business processes.

    Therefore, this thesis investigates how aspect orientation can be supported over the whole BPM lifecycle using imperative aspect-oriented business process models. It also investigates how declarative and hybrid aspect-oriented business process models can support the separation of cross-cutting concerns in the BPM area. This thesis has been carried out following the design science framework, and the result is presented as a set of artifacts (in the form of constructs, models, methods, and instantiations) and empirical findings.

    The artifacts support modeling, analysis, implementation/configuration, enactment, monitoring, adjustment, and mining cross-cutting concerns while supporting business processes using Business Process Management Systems. Thus, it covers the support for the management of these concerns over the whole BPM lifecycle. The use of these artifacts and their application shows that they can reduce the complexity of process models by separating different concerns.

  • 14.
    Jalali, Amin
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Foundation of Aspect Oriented Business Process Management (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Reducing the complexity of information systems is a central concern for researchers. Separation of concerns, also known as the principle of ‘divide and conquer’, has long been a strategy for dealing with complexity. Two examples of the application of this principle in the area of information system design are the breaking out of data management into Database Management Systems (DBMSs) and the separation of business logic from application logic into Business Process Management Systems (BPMSs). However, the separation of cross-cutting concerns from the core concern of a business process is not yet supported in the Business Process Management (BPM) area. The aspect-oriented principle recommends such a separation. When looking into a business process, several concerns, such as security and privacy, can be identified. Therefore, a formal model is needed that provides a foundation for enabling BPMSs to support separation of concerns in the BPM area. This thesis provides such a formal model. Implementing this model in BPMSs would facilitate the design and implementation of business processes with a lower level of complexity, which in turn would reduce the costs associated with BPM projects. The thesis starts with a literature review on aspect orientation both in programming and in the BPM area. Based on this study, a list of requirements for an Aspect Oriented Service for BPMSs is compiled. A formal model for such a service, fulfilling a set of these requirements, is then designed using Coloured Petri Nets and implemented in CPN Tools. The model is evaluated through the execution of a number of scenarios, and the solution is also validated through an industrial case study. The results of the case study are presented and the direction for future work is outlined. The case study demonstrates that separation of concerns through aspect orientation does indeed reduce the complexity of business process models.

  • 15.
    Jalali, Amin
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Service Oriented Modularization using Coloured Petri Nets (2012). Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    Modelling service oriented systems using Coloured Petri Nets usually results in cluttered nets which are hard to understand and modify. This complexity is a result of the many interactions among services. This paper presents a method for designing service oriented models using Coloured Petri Nets. The method results in less complex nets that can be extended more easily. The method is validated by demonstrating its impact on defining the operational semantics of a service.
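    A single firing step of a coloured net can be sketched in a few lines. This toy is far simpler than a CPN Tools model: the places, token colours and guard are invented, and real CPNs have arc expressions, variable bindings and concurrency semantics that are omitted here.

    ```python
    from collections import Counter

    # places hold multisets of coloured tokens
    places = {
        "request": Counter({"gold": 1, "silver": 2}),
        "served": Counter(),
    }

    def fire(guard, src, dst):
        """Fire a transition once: move one token from src to dst whose
        colour satisfies the guard; return the colour, or None if disabled."""
        for colour in list(places[src]):
            if places[src][colour] > 0 and guard(colour):
                places[src][colour] -= 1
                places[dst][colour] += 1
                return colour
        return None

    fired = fire(lambda c: c == "gold", "request", "served")
    print(fired, dict(places["served"]))
    ```

    The guard is what makes the net "coloured": the same transition can be enabled for some token colours and disabled for others, which is the mechanism services use to route only the interactions addressed to them.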

  • 16.
    Jalali, Amin
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Supporting Enactment of Aspect Oriented Business Process Models: an approach to separate cross-cutting concerns in action (2013). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Coping with complexity in Information Systems and Software Engineering is an important issue in both research and industry. One strategy to deal with this complexity is through a separation of concerns, which can result in reducing the complexity, improving the re-usability, and simplifying the evolution. Separation of concerns can be addressed through the Aspect Oriented paradigm. Although this paradigm has been well researched in the field of programming, it is still at a preliminary stage in the area of Business Process Management. While some efforts have been made to propose aspect orientation for business process modeling, it has not yet been investigated how these models should be implemented, configured, run, and adjusted. Such a gap has restrained the enactment of aspect oriented business process models in practice. Therefore, this research enables the enactment of such models to support the separation of cross-cutting concerns in the entire business process management life-cycle. It starts by defining the operational semantics for the Aspect Oriented extension of the Business Process Model and Notation. The semantics specifies how such models can be implemented and configured, and it can be used as a blueprint to support the enactment of aspect oriented business process models. The semantics is implemented in the form of artifacts, which are then used in a banking case study to investigate the current modeling technique. This investigation revealed new requirements, which should be considered in aspect oriented modeling approaches. Thus, the current modeling notation has been extended to include the new requirements. The extended notation has been formalized and investigated through re-modeling the processes in the case study. The results from this investigation show the need to refine the separation rules to support the encapsulation of aspects based on different business process perspectives. Therefore, the new refinement is proposed, formalized, and implemented. The implementation is then used as a prototype to evaluate the result through a case study.
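The core idea of the abstract above, keeping a cross-cutting concern out of the core process definition and weaving it in at enactment time, can be sketched in a few lines. This is an illustrative toy, not the thesis's actual artifacts; the aspect (audit logging), the task names, and the `weave` function are all invented for the example.

```python
# Hypothetical sketch: a cross-cutting concern (audit logging) is kept
# separate from the core business-process tasks and woven in at enactment.
audit_log = []

def audit_advice(task_name):
    """The cross-cutting concern, defined once, outside the core process."""
    audit_log.append(f"audit: {task_name}")

def weave(tasks, advice):
    """Return an enactable process in which the advice runs before each task."""
    def enact():
        results = []
        for name, task in tasks:
            advice(name)            # aspect executed at the join point
            results.append(task())  # core task logic, free of logging code
        return results
    return enact

# Core process: two tasks with no logging concerns mixed in.
process = [("check_credit", lambda: "ok"), ("open_account", lambda: "done")]
run = weave(process, audit_advice)
print(run())      # ['ok', 'done']
print(audit_log)  # ['audit: check_credit', 'audit: open_account']
```

The separation pays off at change time: the audit policy can be replaced without touching any task, which is the reusability argument the abstract makes.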

    Download full text (pdf)
    fulltext
  • 17.
    Johansson, Anna-Lena
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Logic program synthesis using schema instantiation in an interactive environment1995Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The research presented herein proposes a method of program synthesis based on a recursive program schema and performed with an explicit incremental plan as the core of the synthesis. A partial prototype has been built in order to be able to actually perform syntheses according to the method. The presentation of the method is accompanied by examples of performed syntheses.

    The program schemata proposed are simple and based directly on the inductive definition of a data structure which is a basis for the program. The replacement rule for instantiating the schemata is also simple. The simple schema and the simple rule should make the method easy to understand.

    In situations when program sentences in a program are similar, meaning that there are similarities in their derivations, we would like, if feasible, to avoid constructing all the corresponding derivations. A method to decide when a definition yields analogous sentences and which also produces a substitution defining the analogy is presented. As a result we can replace a derivation by a substitution, making the onus of synthesis easier. The method has been implemented as a part of the system for interactive synthesis support.

    The synthesised programs are discussed with three logical concerns in mind as follows: partial correctness, completeness and totality. The synthesised normal programs are always logical consequences of the specification. Whenever the programs and their goals are definite the programs are always partially correct. From a study of the synthesis emerges a sufficient condition for programs that use negation to be partially correct and for definite or normal programs to be complete. Sufficient conditions for the derived relation to be total can be used to show that the program is defined for every element of the recursive set.

  • 18.
    Juell-Skielse, Gustaf
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Improving Organizational Effectiveness through Standard Application Packages and IT Services2011Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Today, the design, use and distribution of standard application packages are changing due to the emergence of service orientation. In the private sector, Enterprise Resource Planning (ERP) systems are extended to include IT services. In the public sector, standard application packages are integrated with IT services which are often referred to as e-Government. E-Government can be extended with mobile technology to add mobility to public processes, so called m-Government.

    The problem addressed in this thesis is how to improve organizational effectiveness through the use of standard application packages and IT services. The objectives are to:

    • Develop a model for explaining the level of adoption of extended ERP among small and medium sized companies.
    • Identify implications and design patterns of business models for service oriented ERP.
    • Establish principles for the design of local government m-services.
    • Develop a method for benefits evaluation of information systems with integrated services.

    The thesis contributes with theory for analyzing, explaining and predicting how the use of standard application packages as well as IT services affects organizational effectiveness. To practice, it provides new concepts that can change the perceptions and mental models that IS-professionals, such as management consultants, use in their professional lives. In particular, it provides implications, design principles, a model and a method for the use of services in conjunction with standard application packages in public and private sector organizations.

    For future research it is suggested to investigate how service orientation affects implementation methods for standard application packages and to investigate the requirements of completely integrated e-Government on e-services, business models and back-office systems.

    Download full text (pdf)
    fulltext
  • 19.
    Juell-Skielse, Gustaf
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Enquist, Håkan
    School of Business, Economics and Law, University of Gothenburg.
    Implications of ERP as Service2011Conference paper (Refereed)
    Abstract [en]

    In this paper we present implications for using and delivering Enterprise Resource Planning as services (ERP-as-a-service). The objective is to construct a framework of opportunities and challenges for users and suppliers of ERP-as-a-service. The framework is based on a combination of literature study and field study and includes approximately 80 implications. New implications, not found in literature, were identified in the field study. Examples of new implications include: more focus on IT-value; simplified phasing of implementation and improved supplier brand. For future research it is suggested that the framework is tested in a larger setting and that implications are prioritized for certain industries and types of business models.

  • 20.
    Juell-Skielse, Gustaf
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Perjons, Erik
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    VAMEE: A Value Aware Method for Evaluating Inclusive E-Government Initiatives2011In: The Practice of Enterprise Modeling / [ed] Johannesson, P., Krogstie, J., Opdahl, A., Heidelberg: Springer , 2011, p. 97-111Chapter in book (Refereed)
    Abstract [en]

    The growing use of ICT solutions for improving the public sector has created a need for valuating e-government initiatives. A number of methods for this purpose have been developed, but they are typically restricted to analyzing the benefits and costs of only one single actor. There is, therefore, a need for methods that take a broader view and take into account entire networks of actors. This paper proposes a novel method, called VAMEE, the purpose of which is to produce a well-grounded and easily understandable valuation of an e-government initiative that takes into consideration the benefits, costs, and interrelationships of all actors concerned. The basis of the proposed method is a combination of enterprise modeling techniques, in particular goal modeling and value modeling, with an established method for cost benefit analysis (i.e. Peng). VAMEE is designed to be inclusive, easily understandable, and visual. These properties of the method will support accurate and unbiased valuations as well as improved innovation in the development of e-government initiatives.
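The kind of multi-actor valuation the abstract argues for, benefits and costs per actor across a whole network rather than for a single actor, can be illustrated with a trivially small calculation. The actor names and figures below are invented; this is not VAMEE itself, only the arithmetic it builds on.

```python
# Illustrative multi-actor cost-benefit calculation: net value per actor
# and for the whole network of an e-government initiative. Data invented.
def network_valuation(actors):
    """actors: {name: (benefits, costs)} -> (per-actor net value, total)."""
    per_actor = {a: b - c for a, (b, c) in actors.items()}
    return per_actor, sum(per_actor.values())

actors = {
    "municipality": (500, 300),
    "citizens": (200, 50),
    "it_supplier": (150, 100),
}
per_actor, total = network_valuation(actors)
print(per_actor)  # {'municipality': 200, 'citizens': 150, 'it_supplier': 50}
print(total)      # 400
```

A single-actor method would report only one of these rows; the point of a network view is that an initiative can be worthwhile in total even when one actor's row is negative.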

  • 21.
    Kajko-Mattsson, Mira
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Upfront corrective maintenance at the front-end support level2005In: Managing corporate information systems evolution and maintenance / [ed] Khaled M. Khan, Yan Zhang, Hershey, PA: Idea Group Publishing, 2005, p. 75-107Chapter in book (Other academic)
    Abstract [en]

    This chapter presents the process of upfront corrective maintenance at the front-end support level. The chapter is logically divided into two parts. The first part introduces the domain of the upfront corrective maintenance process, and presents its current status practiced in the industry. It first describes the process, places it within a global virtual IT enterprise and explains its role within the enterprise. It then puts the process in the context of a total front-end support process, the process performing a multitude of diverse types of support activities. Finally, it lists the problems encountered by the front-support organisations today. The second part provides a glimpse into Corrective Maintenance Maturity Model (CM3): Upfront Maintenance, a process model specialized in upfront corrective maintenance. It describes its process phases, maturity levels, and collaboration with the CM3: Problem Management model, a problem management process model at the back-end support level. The goal of this chapter is to provide a detailed insight of the process of upfront corrective maintenance.

  • 22.
    Karokola, Geoffrey Rwezaura
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    A Framework for Securing e-Government Services: The Case of Tanzania2012Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    e-Government services are becoming one of the most important and efficient means by which governments (G) interact with businesses (B) and citizens (C). This has brought not only tremendous opportunities but also serious security challenges. Critical information assets are exposed to current and emerging security risks and threats. In the course of this study, it was learnt that e-government services are heavily guided and benchmarked by e-Government maturity models (eGMMs). However, the models lack built-in security services, technical as well as non-technical, leading to a lack of alignment of strategic objectives between e-government services and security services. Information security has an important role in mitigating the security risks and threats posed to e-government services. Security improves the quality of the services offered.

    In light of the above, the goal of this research work is to propose a framework that would facilitate government organisations to effectively offer appropriate secure e-government services. To achieve this goal, an empirical investigation was conducted in Tanzania involving six government organizations. The investigations were inter-foiled by a sequence of structural compositions resulting in a proposition of a framework for securing e-government services which integrates IT security services into eGMMs. The research work was mainly guided by a design science research approach complemented in parts by systemic-holistic and socio-technical approaches.

    The thesis contributes to the empirical and theoretical body of knowledge within the computer and systems sciences on securing e-government structures. It encompasses a new approach to secure e-government services incorporating security services into eGMMs. Also, it enhances the awareness, need and importance of security services to be an integral part of eGMMs to different groups such as researched organizations, academia, practitioners, policy and decision makers, stakeholders, and the community.

    Download full text (pdf)
    Comprehensive Summary
  • 23.
    Nilsson, Björn E.
    Stockholm University, Faculty of Social Sciences.
    On models and mappings in a data base environment: a holistic approach to data modelling1979Doctoral thesis, monograph (Other academic)
  • 24.
    Nyfjord, Jaana
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Kajko-Mattsson, Mira
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Integrating risk management with software development: state of practice2008In: Proceedings of The International MultiConference of Engineers and Computer Scientists 2008 Vol. I / [ed] S. I. Ao, Oscar Castillo, Craig Douglas, David Dagan Feng, Jeong-A Lee, Newswood Limited , 2008, p. 878-884Conference paper (Other academic)
    Abstract [en]

    In this paper, we investigate the state of practice of integrating risk management with software development in 37 software organizations. We do this by using a set of evaluation criteria covering various process integration aspects. Our results recognize that process integration in this domain is still in its infancy. There is a great need for process integration and process integration models within the industry studied.

  • 25.
    Nyfjord, Jaana
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Kajko-Mattsson, Mira
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Software risk management: practice contra standard models2008In: PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON RESEARCH CHALLENGES IN INFORMATION SCIENCE RCIS 2008 Marrakech, June 3-6, Morocco / [ed] Colette Rolland, Martine Collard, Oscar Pastor, André Flory, Jean Louis Cavarero, IEEE Computer Society Press , 2008, p. 65-70Conference paper (Other academic)
    Abstract [en]

    Little is known about the compliance of risk management models with the industrial practice and vice versa. In this paper, we compare the industrial risk management practice against a risk management model that we have synthesized from a set of current risk management models. This comparison has resulted in several discrepancies observed. As a result, this paper suggests a list of issues that need to be addressed in both the industrial and standard models.

  • 26.
    Rahman, Hasibur
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Distributed Intelligence-Assisted Autonomic Context-Information Management: A context-based approach to handling vast amounts of heterogeneous IoT data2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    As an implication of the rapid growth of Internet-of-Things (IoT) data, the current focus has shifted towards utilizing and analysing the data in order to make sense of it, the aim being to make the instantaneous, automated, and informed decisions that will drive the future IoT. This corresponds to extracting and applying knowledge from IoT data, which poses a substantial challenge but also carries high value. Context plays an important role in reaping value from data, and is capable of countering the IoT data challenges. The management of heterogeneous contextualized data is infeasible and insufficient with existing solutions, which mandates new solutions. Research until now has mostly concentrated on providing cloud-based IoT solutions; among other issues, this hampers real-time and faster decision-making. In view of this, this dissertation undertakes a study of a context-based approach entitled Distributed intelligence-assisted Autonomic Context Information Management (DACIM), the purpose of which is to efficiently (i) utilize and (ii) analyse IoT data.

    To address the challenges and solutions with respect to enabling DACIM, the dissertation starts by proposing a logical-clustering approach for proper IoT data utilization. The environment in which the vast number of Things is immersed changes rapidly and becomes dynamic. To this end, self-organization has been supported by proposing self-* algorithms, which resulted in 10 organized Things per second and a high accuracy rate for Things joining. IoT contextualized data further requires scalable dissemination, which has been addressed by a Publish/Subscribe model; it has been shown that a high publication rate and faster subscription matching are realisable. The dissertation ends with the proposal of a new approach that assists the distribution of intelligence with regard to analysing context information, in order to enhance the intelligence of things. The approach brings some of the application of knowledge from the cloud to the edge, where the edge-based solution is facilitated with intelligence that enables faster responses and reduced dependency on rules, by leveraging artificial intelligence techniques. To infer knowledge for different IoT applications closer to the Things, a multi-modal reasoner has been proposed, which demonstrates faster response. The evaluations of the designed and developed DACIM give promising results, which are distributed over seven publications; from this, it can be concluded that it is feasible to realize a distributed intelligence-assisted context-based approach that contributes towards autonomic context information management in the ever-expanding IoT realm.
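The edge-side reasoning described above, fast local rules complemented by a data-driven fallback so that not every decision has to travel to the cloud, can be sketched minimally. The rules, thresholds, and sensor values below are invented for illustration; DACIM's actual multi-modal reasoner is far richer.

```python
# Minimal sketch in the spirit of an edge reasoner: try cheap local rules
# first; if no rule fires, fall back to a simple data-driven estimate.
def rule_reasoner(reading):
    """Fast hand-written rules; returns None when no rule applies."""
    if reading["temp"] > 30:
        return "cooling_on"
    if reading["temp"] < 15:
        return "heating_on"
    return None

def learned_reasoner(reading, history):
    """Stand-in for a trained model: compare against the running mean."""
    mean = sum(history) / len(history)
    return "cooling_on" if reading["temp"] > mean else "idle"

def reason(reading, history):
    return rule_reasoner(reading) or learned_reasoner(reading, history)

history = [18, 20, 22, 21]                 # past readings at the edge
print(reason({"temp": 35}, history))       # 'cooling_on' (rule fired)
print(reason({"temp": 24}, history))       # 'cooling_on' (fallback: above mean)
```

Keeping the rule path first is what gives the faster response the abstract reports; the fallback reduces the dependency on hand-written rules.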

    Download full text (pdf)
    fulltext
  • 27.
    Rahman, Hasibur
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Self-Organizing Logical-Clustering Topology for Managing Distributed Context Information2015Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Internet of Things (IoT) is on the verge of experiencing a paradigm shift, the focus of which is the integration of people, services, context information, and things in the Connected Society, thus enabling the Internet of Everything (IoE). Hundreds of billions of things will be connected to IoT/IoE by 2020. This massive immersion of things paves the way for sensing and analysing anything, anytime and anywhere. This everywhere computing, coupled with Internet- or web-enabled services, has allowed access to a vast amount of distributed context information from heterogeneous sources. This enormous amount of context information will remain under-utilized if not properly managed. Therefore, this thesis proposes a new approach of logical-clustering, as opposed to physical clustering, aimed at enabling efficient context information management.

    However, applying this new approach requires many research challenges to be met. By adhering to a design science research method, this thesis addresses these challenges and proposes solutions to them. The thesis first outlines the architecture for realizing the logical-clustering topology, for which a two-tier hierarchical distributed hash table (DHT) based system architecture and a Software Defined Networking (SDN)-like approach are utilized, whereby the clustering identifications are managed on the top-level overlay (as context storage) and heterogeneous context information sources are controlled via the bottom level. The feasibility of the architecture has been proven with the ns-3 simulation tool. The next challenge is to enable scalable clustering-identification dissemination, for which a distributed Publish/Subscribe (PubSub) model is developed. The massive number of immersed nodes further necessitates a dynamic self-organized system. The thesis concludes by proposing new algorithms for the autonomic management of IoT to bring about this self-organization. These algorithms enable structuring the logical-clustering topology in an organized way with minimal intervention from outside sources and further ensure that it evolves correctly. A distributed IoT context information-sharing platform, MediaSense, is employed and extended to prove the feasibility of the dynamic PubSub model and the correctness of the self-organized algorithms, and to utilize it as context storage. The results are promising: a high number of PubSub messages per second and fast subscription matching, while self-organization enabled logical-clustering to evolve correctly and provided results on a par with the existing MediaSense for entity joining, with high discovery rates for non-concurrent entity joining.

    The increase in context information requires its proper management. Being able to cluster (i.e. filter) heterogeneous context information based on context similarity can help to avoid under-utilization of resources. This thesis presents an accumulation of work which can be comprehended as a step towards realizing the vision of logical-clustering topology.
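The essence of logical-clustering, grouping distributed context sources by what they sense rather than where they physically are, can be shown in a few lines. The source identifiers, context types, and locations below are invented; the thesis's real topology additionally involves DHT overlays and PubSub dissemination.

```python
# Sketch of logical- (vs. physical) clustering: sources are grouped by
# context similarity (here, the context type), ignoring physical location.
from collections import defaultdict

def logical_clusters(sources):
    """sources: [(source_id, context_type, location)] -> clusters by context."""
    clusters = defaultdict(list)
    for source_id, context_type, _location in sources:
        clusters[context_type].append(source_id)  # location plays no role
    return dict(clusters)

sources = [
    ("s1", "temperature", "stockholm"),
    ("s2", "air_quality", "maputo"),
    ("s3", "temperature", "maputo"),
]
print(logical_clusters(sources))
# {'temperature': ['s1', 's3'], 'air_quality': ['s2']}
```

A physical clustering would have put s2 and s3 together because they share a location; the logical view instead lets a consumer subscribe to "temperature" and receive only relevant sources.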

    Download full text (pdf)
    fulltext
  • 28.
    Sotomane, Constantino
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Factors affecting the use of data mining in Mozambique: Towards a framework to facilitate the use of data mining2014Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Advances in technology have enabled organizations to collect a variety of data at high speed and provided the capacity to store them. As a result, the amount of data available is increasing daily at a rapid rate. The data stored in organizations hold important information to improve decision making and gain competitive advantage. To extract useful information from these huge amounts of data, special techniques such as data mining are required. Data mining is a technique capable of extracting useful knowledge from vast amounts of data. The successful application of data mining in organizations depends on several factors that may vary in relation to the environment. In Mozambique, these factors have never been studied. The study of the factors affecting the use of data mining is important to determine which aspects require special attention for the success of the application of data mining. This thesis presents a study of the level of awareness and use of data mining in Mozambique and the factors affecting its use. It is a step towards the development of a framework to facilitate the application of data mining in Mozambique. The study is exploratory and uses multiple case studies in two institutions in Maputo city, the capital of Mozambique, one in the area of agriculture and the other in the field of electricity, and of Maputo city more broadly. The study involved a combination of observations, focus group discussions and enquiries directed at managers and practitioners on aspects of information technology (IT) and data analysis. The results of the study reveal that the level of awareness and use of data mining in Mozambique is still very weak. Only a limited number of professionals in IT are aware of the concept or its uses.
    The main factors affecting the use of data mining in Mozambique are: the quality, availability and integration of, and access to, data; skill in data mining; functional integration; alignment of IT and business; interdisciplinary learning; existence of champions; commitment of top management; existence of change management; privacy; cost; and the availability of technology. Three applications were developed in two real settings, which showed that there are problems to be solved with data mining. The two examples in the area of electricity demonstrate how data mining is used to develop models to forecast electricity consumption and how they can enhance the estimation of electricity to be sold to the international market. The application in the area of agriculture extracts associations between the characteristics of small farmers and the yield of maize from a socioeconomic database with hundreds of attributes. The applications provide practical examples of how data mining can help to discover patterns that can lead to the development of more accurate models and find interesting associations between variables in the dataset. The factors identified in this thesis can be used to determine the feasibility of the implementation of data mining projects and ensure their success.
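The electricity-forecasting use case mentioned above can be illustrated with the simplest possible model: fitting a least-squares trend to past monthly consumption and extrapolating one step ahead. The consumption figures are invented, and the thesis's actual models are certainly more sophisticated; this only shows the shape of the task.

```python
# Invented mini-example of consumption forecasting: ordinary least squares
# on a monthly series, then a one-step-ahead prediction. Pure stdlib.
def fit_trend(ys):
    """Fit y = a + b*x by least squares, with x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

consumption = [100, 104, 108, 112, 116]   # MWh per month (made up)
a, b = fit_trend(consumption)
forecast = a + b * len(consumption)       # predict the next month
print(round(forecast, 1))                 # 120.0
```

Even this toy shows why data quality, one of the factors listed above, matters: a few mis-recorded months would shift the fitted slope and every forecast built on it.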

    Download full text (pdf)
    fulltext
  • 29.
    Sundholm, Hillevi
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Artman, Henrik
    Ramberg, Robert
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Backdoor creativity: collaborative creativity in technology supported teams2004In: Cooperative systems design: scenario-based design of collaborative systems / [ed] Francoise Darses, Rose Dieng, Carla Simone, Manuel Zacklad, Amsterdam: IOS Press, 2004, p. 99-114Conference paper (Refereed)
  • 30.
    Verhagen, Harko
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Autonomy and reasoning for natural and artificial agents2004In: Agents and computational autonomy: potential, risks, and solutions / [ed] Matthias Nickles, Michael Rovatsos, Gerhard Weiss, Berlin: Springer Berlin/Heidelberg, 2004, Vol. 2969 , p. 83-94Conference paper (Other academic)
  • 31.
    Wahlgren, Gunnar
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    A Maturity Model for Measuring Organizations Escalation Capability of IT-related Security Incidents2020Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    An inability to handle IT-related security incidents can have devastating effects on both organizations and society at large. The European Union Agency for Network and Information Security (ENISA) emphasizes that cyber-security incidents affecting critical information infrastructures may simultaneously create significant negative impacts for several countries, and when incidents strike, the primary business processes of many organizations may be jeopardized. For example, the Swedish civil contingencies agency, MSB, reported in 2011 that a major Swedish IT services provider caused an IT-related security incident which in turn created large operational disruptions for a number of public and private organizations in Sweden. The management of IT-related security incidents is therefore an important issue facing most organizations today. Such incidents may threaten the organization as a whole and are not purely an IT issue; when handling incidents, escalation to the correct individual or groups of individuals for decision making is very important, as the organization must react quickly. Consequently, the major research goal of this thesis is to examine if the ability of an organization to escalate IT-related security incidents can be modeled, measured and improved. To achieve this goal, an artifact that can be used within an organization to model and measure its capability to escalate IT-related security incidents was designed, implemented and tested. This artifact consists of a maturity model whose purpose is to measure the level of maturity of the various attributes identified as necessary for an organization to handle escalations. In this thesis, a design science approach is applied, and the research project is divided into three design cycles, with the artifact being gradually developed and evaluated in each cycle. 
Evaluations were performed via interviews with representatives of 13 different organizations, including both private and public entities, and five different surveys with 78 individual participants. The conclusions of the research are that the use of the proposed self-assessment artifact can allow organizations to predict their ability to handle the escalation of IT-related security incidents with improved certainty.
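A common way to score a maturity model of the kind described above is to rate each attribute on an ordinal scale and let the organization's overall level be capped by its weakest attribute. The attribute names and scores below are invented, not the artifact's actual attribute set; this only sketches the measurement idea.

```python
# Sketch of staged maturity measurement: each escalation attribute is
# scored 1-5 and the overall level is the minimum across attributes.
def maturity_level(scores):
    """Overall maturity is limited by the weakest attribute."""
    return min(scores.values())

scores = {
    "incident_detection": 4,
    "escalation_paths": 2,
    "decision_mandate": 3,
}
print(maturity_level(scores))  # 2 -- capped by 'escalation_paths'
```

The "weakest link" aggregation is a deliberate design choice in many maturity models: an organization that detects incidents well but cannot route them to a decision maker is still only as capable as that bottleneck.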

    Download full text (pdf)
    A Maturity Model for Measuring Organizations Escalation Capability of IT-related Security Incidents
    Download (pdf)
    Errata
  • 32.
    Wu, Desheng D.
    et al.
    Stockholm University, Faculty of Social Sciences, Stockholm Business School. University of Toronto, Canada.
    Olson, David L.
    Luo, Cuicui
    A Decision Support Approach for Accounts Receivable Risk Management2014In: IEEE Transactions on Systems, Man, and Cybernetics: Systems, ISSN 2168-2216, Vol. 44, no 12, p. 1624-1632Article in journal (Refereed)
    Abstract [en]

    Financial disasters in private firms led to increased emphasis on various forms of risk management, to include market risk management, operational risk management, and credit risk management. Financial institutions are motivated by the need to meet increased regulatory requirements for risk measurement and capital reserves. This paper describes and demonstrates a model to support risk management of accounts receivable. We present a decision support model for a large bank enabling assessment of risk of default on the part of loan recipients. A credit scoring model is presented to assess account creditworthiness. Alternative methods of risk measurement for fault detection are compared, and a logistic regression model selected to analyze accounts receivable risk. Accuracy results of this model are presented, enabling accounts receivable managers to confidently apply statistical analysis through data mining to manage their risk.
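The scoring step the abstract describes, a logistic model turning account features into a probability of default, can be shown in miniature. The feature set, weights, and threshold below are invented for illustration and are not the paper's fitted model.

```python
# Toy logistic scoring of accounts-receivable default risk.
# Coefficients are invented, not estimated from the paper's data.
import math

def default_probability(features, weights, bias):
    """Logistic (sigmoid) link applied to a linear score."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical features: [days_overdue / 30, credit utilization ratio]
weights, bias = [1.2, 0.8], -2.0
p = default_probability([3.0, 0.9], weights, bias)   # 90 days overdue, 90% utilized
print(round(p, 3))   # high probability of default
print(p > 0.5)       # above the decision threshold -> flag for review
```

In practice the weights are estimated from labeled historical accounts (that is the "logistic regression" step), and the 0.5 threshold is tuned against the relative cost of missed defaults versus false alarms.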

  • 33.
    Xiao, Bin
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Contextual entity networking: Enable Dynamic and Flexible Interaction between Heterogeneous Entities in Support of Ambient Intelligent Services2015Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Ambient intelligent (AmI) services acquire, comprehend, and react to contextual information from the user's immersed surroundings in order to address human needs, based on the interaction between AmI entities. Therefore, it is important to enable flexible and dynamic interaction between these AmI entities and to keep up with the changing user-immersed surroundings. Previous research studies have proposed many techniques to support the interactions between entities. However, there are still many limits that hinder dynamic and flexible entity interactions. For example, most information exchanged between entities is not filtered for mutual context relevance, so entities are often bothered by irrelevant information. Moreover, the tight cooperation between sensors and actuators is not well calibrated; therefore, entity interaction amounts to mere sensor interaction.

    To deal with the limitations and support dynamic and flexible entity interaction, this thesis proposes an approach called contextual entity networking (CEN) to enable dynamic and flexible contextual interaction between AmI entities. Using CEN, contextual entity interaction is set upon a virtual world formed by entity mirrors and their service logics, which are enabled to explore the potentially related entities and enable the tight logical cooperation between entities from different spaces. Moreover, the information is shared only with the contextual related AmI entities.

    With the proposed technique, context messages are shared with the logically related entities, and interactions between entities obey the service logics that enable all the entities to work towards the same goal. Due to the decoupled and distributed infrastructure, entities can freely interact with any other entities in the platform, which does not impact the final consequence of context sharing. These features enhance the dynamicity and flexibility of interactions and enable the entity interactions to easily suit the changing user immersed surroundings. 
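The context-relevance filtering described above, delivering a message only to the entities logically related to it, can be sketched as a tag-overlap check. The entity names, interest sets, and message format are all invented; CEN's actual entity mirrors and service logics go well beyond this.

```python
# Sketch of context-relevant delivery: a message reaches only entities
# whose declared interests overlap its context tags. Identifiers invented.
def deliver(message, entities):
    """Return ids of entities whose interests intersect the message tags."""
    return [e["id"] for e in entities if e["interests"] & message["tags"]]

entities = [
    {"id": "thermostat", "interests": {"temperature"}},
    {"id": "doorlock", "interests": {"presence"}},
]
msg = {"tags": {"temperature", "humidity"}, "value": 22.5}
print(deliver(msg, entities))  # ['thermostat'] -- the doorlock is not bothered
```

This is the "not bothered by irrelevant information" property: the doorlock never sees temperature updates, yet any new entity that declares an overlapping interest starts receiving them without reconfiguration.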

  • 34.
    Xiao, Bin
    Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
    Data-Centric Network of Things: A Method for Exploiting the Massive Amount of Heterogeneous Data of Internet of Things in Support of Services2017Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Internet of things (IoT) generates a massive amount of heterogeneous data, which should be efficiently utilized to support services in different domains. Specifically, data need to be supplied to services by understanding the needs of the services and the changes in the environment, so that the necessary data can be provided efficiently but without overfeeding. However, it is still very difficult for IoT to fulfill such data supply with only the existing support of communication, networks, and infrastructure, while the most essential issues remain unaddressed, namely the heterogeneity issue, the resource coordination issue, and the environments' dynamicity issue. This necessitates a specific study of those issues and a method for utilizing the massive amount of heterogeneous data to support services in different domains.

    This dissertation presents a novel method, called the data-centric network of things (DNT), which handles heterogeneity, coordinates resources, and understands the changing IoT entity relations in dynamic environments in order to supply data in support of services. As a result, various services based on IoT (e.g., smart cities, smart transport, smart healthcare, smart homes, etc.) are supported by receiving enough necessary data without being overfed.

    The contributions of the DNT to IoT and big data research are as follows. Firstly, the DNT enables IoT to perceive data, resources, and the relations among IoT entities in dynamic environments; this perceptibility enhances IoT's ability to handle heterogeneity at different levels. Secondly, the DNT coordinates IoT edge resources to process and disseminate data based on the perceived results, which releases, to a certain degree, the big data pressure caused by centralized analytics. Thirdly, the DNT manages entity relations for data supply by handling the environment's dynamicity. Finally, the DNT supplies the necessary data to satisfy different service needs, avoiding both data-hungry and data-overfed states.

    Download full text (pdf)
    Data-Centric Network of Things