Why parallel computer processing systems are preferred to serial computer processing systems: A formal discussion

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Serial computer processing systems execute software on a single central processing unit (CPU), whereas parallel computer processing systems use multiple CPUs simultaneously. Parallel computer processing systems are an evolution of serial computer processing systems that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence. Today, commercial applications provide an equal or greater driving force in the development of faster computers, since these applications require the processing of large amounts of data. The arguments usually offered for preferring parallel over serial processing are that it saves time and/or money and makes it possible to solve larger problems; in addition, serial computer processing systems face physical and economic limits. We would, however, like to be more precise and give a definitive formal proof of the claim that parallel computer processing systems are better than serial processing systems. The main objective and contribution of this paper is a formal, mathematical proof that parallel computer processing systems are better than serial computer processing systems (better in the sense of saving time and/or money and being able to solve larger problems). This is achieved by means of Lyapunov stability theory and max-plus algebra applied to discrete event systems modeled with timed Petri nets. © 2011 Academic Publications, Ltd.
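The core intuition behind the max-plus-algebra argument can be sketched in a few lines: on a serial system, independent task durations add up, while on a parallel system the makespan is their maximum, which is precisely the max-plus "sum". Below is a minimal, hypothetical Python illustration (the task times and the diagonal matrix are invented for demonstration and are not the paper's actual timed-Petri-net model); the `maxplus_matvec` helper shows the matrix-vector product used in firing-time recursions of the form x(k+1) = A ⊗ x(k).

```python
import numpy as np

EPS = -np.inf  # the max-plus "zero" element: no arc between two events

def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x, axis=1)

# Illustrative processing times for three independent tasks (invented values).
times = [3.0, 5.0, 2.0]
serial_makespan = sum(times)    # single CPU: tasks queue, durations add -> 10.0
parallel_makespan = max(times)  # one CPU per task: slowest task decides -> 5.0

# The same parallel completion times, written as one max-plus step from a
# common start time 0: each task i finishes at A[i, i] + 0.
A = np.array([[3.0, EPS, EPS],
              [EPS, 5.0, EPS],
              [EPS, EPS, 2.0]])
x0 = np.zeros(3)                     # all tasks released at time 0
completion = maxplus_matvec(A, x0)   # per-task completion times [3., 5., 2.]
print(serial_makespan, parallel_makespan, completion.max())
```

The gap between `serial_makespan` and `parallel_makespan` is the time saving the abstract refers to; the paper's contribution is proving such statements formally via Lyapunov stability of the max-plus recursion rather than by example.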
Original language: American English
Pages (from-to): 329-347
Number of pages: 19
Journal: International Journal of Pure and Applied Mathematics
State: Published - 24 Jun 2011

Fingerprint

Parallel Computers
Computer systems
Processing
Petri nets
Program processors
central processing units
algebra
Max-plus Algebra
Time Petri Nets
computer programs
Discrete event simulation
Formal Proof
Discrete Event Systems
Lyapunov Stability
Driving Force
Algebra
Justify
Unit
Software

Cite this

@article{ecee6c557bfb453b873c66deb4b55e63,
title = "Why parallel computer processing systems are preferred to serial computer processing systems: A formal discussion",
abstract = "Serial computer processing systems execute software on a single central processing unit (CPU), whereas parallel computer processing systems use multiple CPUs simultaneously. Parallel computer processing systems are an evolution of serial computer processing systems that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence. Today, commercial applications provide an equal or greater driving force in the development of faster computers, since these applications require the processing of large amounts of data. The arguments usually offered for preferring parallel over serial processing are that it saves time and/or money and makes it possible to solve larger problems; in addition, serial computer processing systems face physical and economic limits. We would, however, like to be more precise and give a definitive formal proof of the claim that parallel computer processing systems are better than serial processing systems. The main objective and contribution of this paper is a formal, mathematical proof that parallel computer processing systems are better than serial computer processing systems (better in the sense of saving time and/or money and being able to solve larger problems). This is achieved by means of Lyapunov stability theory and max-plus algebra applied to discrete event systems modeled with timed Petri nets. {\circledC} 2011 Academic Publications, Ltd.",
author = "Konigsberg, {Zvi Retchkiman}",
year = "2011",
month = "6",
day = "24",
language = "American English",
pages = "329--347",
journal = "International Journal of Pure and Applied Mathematics",
issn = "1311-8080",
publisher = "Academic Publications Ltd.",
}

TY - JOUR

T1 - Why parallel computer processing systems are preferred to serial computer processing systems: A formal discussion

AU - Konigsberg, Zvi Retchkiman

PY - 2011/6/24

Y1 - 2011/6/24

N2 - Serial computer processing systems execute software on a single central processing unit (CPU), whereas parallel computer processing systems use multiple CPUs simultaneously. Parallel computer processing systems are an evolution of serial computer processing systems that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence. Today, commercial applications provide an equal or greater driving force in the development of faster computers, since these applications require the processing of large amounts of data. The arguments usually offered for preferring parallel over serial processing are that it saves time and/or money and makes it possible to solve larger problems; in addition, serial computer processing systems face physical and economic limits. We would, however, like to be more precise and give a definitive formal proof of the claim that parallel computer processing systems are better than serial processing systems. The main objective and contribution of this paper is a formal, mathematical proof that parallel computer processing systems are better than serial computer processing systems (better in the sense of saving time and/or money and being able to solve larger problems). This is achieved by means of Lyapunov stability theory and max-plus algebra applied to discrete event systems modeled with timed Petri nets. © 2011 Academic Publications, Ltd.

AB - Serial computer processing systems execute software on a single central processing unit (CPU), whereas parallel computer processing systems use multiple CPUs simultaneously. Parallel computer processing systems are an evolution of serial computer processing systems that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence. Today, commercial applications provide an equal or greater driving force in the development of faster computers, since these applications require the processing of large amounts of data. The arguments usually offered for preferring parallel over serial processing are that it saves time and/or money and makes it possible to solve larger problems; in addition, serial computer processing systems face physical and economic limits. We would, however, like to be more precise and give a definitive formal proof of the claim that parallel computer processing systems are better than serial processing systems. The main objective and contribution of this paper is a formal, mathematical proof that parallel computer processing systems are better than serial computer processing systems (better in the sense of saving time and/or money and being able to solve larger problems). This is achieved by means of Lyapunov stability theory and max-plus algebra applied to discrete event systems modeled with timed Petri nets. © 2011 Academic Publications, Ltd.

UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=79959359543&origin=inward

UR - https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=79959359543&origin=inward

M3 - Article

SP - 329

EP - 347

JO - International Journal of Pure and Applied Mathematics

JF - International Journal of Pure and Applied Mathematics

SN - 1311-8080

ER -