University of Athens | Faculty of Sciences | Department of Physics | Section of Nuclear & Particle Physics | Research | Nuclear Theory
Artificial Intelligence in Nuclear Physics: Pythagoras Artificial Intelligence Modeling Project

Machine learning (ML) refers to a system capable of the autonomous acquisition and integration of knowledge. This capacity to learn from experience, analytical observation, and other means results in a system that can continuously self-improve and thereby offer increased efficiency and effectiveness.

The tasks to which learning machines (LMs) are applied tend to fall within the following broad categories:
:: Function approximation, or regression analysis, including time series prediction and modelling.
:: Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
:: Data processing, including filtering, clustering, blind signal separation and compression.
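As an illustrative sketch of the first category, function approximation (regression), the following recovers an underlying curve from noisy samples; the data, the sine target, and the choice of a cubic polynomial fit are our own assumptions, not part of the text:

```python
import numpy as np

# Function approximation (regression) on synthetic data:
# recover an underlying curve from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 100)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

# Fit a cubic polynomial as a simple parametric approximator.
coeffs = np.polyfit(x, y, deg=3)
y_hat = np.polyval(coeffs, x)

# Root-mean-square error of the fit against the noiseless curve.
rmse = float(np.sqrt(np.mean((y_hat - np.sin(x)) ** 2)))
print(round(rmse, 3))
```

Any of the learning machines discussed below (MLPs, RBFNs, etc.) could replace the polynomial here; the task definition — mapping inputs to a continuous target — stays the same.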

The special features of LMs, such as the capability to learn from examples, adaptivity, parallelism, robustness to noise, and fault tolerance, have a significant impact on both industry and science. Machine learning hardware and software techniques are very often used to solve real-world problems, and in physics research they have been applied in a wide range of fields, from classical physics to experimental high-energy physics and theoretical nuclear astrophysics, offering faster and more accurate approaches than traditional methodologies. Ref. [1] summarizes a number of AI applications in high-energy and nuclear physics presented at ACAT 2002 by many different experiments, such as DELPHI, NA48, CDF, DØ, ALICE, CMS and XEUS. A great amount of work has also been done in medical physics applications, including DNA analysis, medical diagnosis, and bioinformatics.

References

^ U. Müller, "Artificial intelligence applications in high energy and nuclear physics", Nuclear Instruments and Methods in Physics Research A 502 (2003) 811–814. doi:10.1016/S0168-9002(03)00607-7.

Regression Analysis Using Neural Networks

The powerful developments in the neural network (NN) field in recent years have led to many function approximation approaches based on diverse algorithmic and mathematical principles ^{[1-4]}. Representative paradigms include the popular multilayered perceptron (MLP), which transforms the inputs via successive nonlinear activation functions; projection pursuit regression (PPR) methods, which use multiple types of nonlinear transformation units; and radial basis function networks (RBFNs), whose approximation is based on linear combinations of nonlinear basis functions. Other examples include the group method for data handling (GMDH) and high-order neural networks (HONNs), which form polynomial approximations of the inputs; multivariate adaptive regression splines (MARSs), which generate products of basis functions using recursive partitioning of the input space; and functional link neural networks (FLNNs), which achieve simplified network architectures via artificially added nonlinearities. Various other, more recent NN approaches to function approximation are also under development.
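A minimal sketch of how an MLP of the kind described above performs function approximation: one tanh hidden layer and a linear output, trained by plain gradient descent on mean-squared error. The architecture, the target function f(x) = x², and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Training data for a 1-D regression problem: target f(x) = x^2.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = x ** 2

# One hidden layer of 16 tanh units, linear output (a small MLP).
n_hidden = 16
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    # Forward pass: successive nonlinear transformation of the inputs.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    # Backward pass: chain rule through the linear and tanh layers.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    # Gradient-descent updates on the mean-squared error.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 4))
```

An RBFN would differ only in the hidden layer: each unit would output a radial basis function of the distance to a centre, with the output again a linear combination of those activations.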

References

^ C. M. Bishop, "Neural Networks for Pattern Recognition", (Oxford Univ. Press, U.K., 1996).
^ T. Hastie, R. Tibshirani, and J. H. Friedman, "The Elements of Statistical Learning", (Springer-Verlag, N.Y., 2001).
^ S. Haykin, "Neural Networks: A Comprehensive Foundation", (Prentice-Hall, Upper Saddle River, N.J., 1999).
^ A. Webb, "Statistical Pattern Recognition", (Wiley, N.Y., 2002).