Machine Learning Optimization Using Calculus

Authors

  • Dr. Naveen Kashyap, Ph.D. in Mathematics, Ambikapur, Surguja, C.G.

DOI:

https://doi.org/10.29070/ppmaqh42

Keywords:

Machine Learning, Calculus, Gradient Descent, Optimization, Derivatives, Artificial Intelligence, Loss Function

Abstract

Machine learning is a significant field within information technology, concerned with systems that learn from data and perform intelligent operations. Among its many tasks, model optimization is central, and optimizing a machine learning model is the goal of this study. In particular, this work examines the role of calculus and derivatives in machine learning optimization, with gradient descent regarded as the best-known technique for minimizing loss functions. The study combines theoretical analysis with practical examples. The results show that, using calculus, machine learning models can be made more precise.
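The gradient descent idea mentioned in the abstract can be illustrated with a minimal sketch (not taken from the article itself): the derivative of the loss tells us which direction increases it, so we repeatedly step in the opposite direction. The function name, learning rate, and toy loss below are illustrative choices, not the author's notation.

```python
# Minimal sketch of gradient descent (illustrative example, assumed notation).
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # update rule: w <- w - lr * f'(w)
    return w

# Toy loss f(w) = (w - 3)**2, whose derivative is f'(w) = 2 * (w - 3).
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(w_min)  # converges toward the minimizer w = 3
```

With a learning rate of 0.1, each step shrinks the distance to the minimizer by a constant factor, which is the kind of convergence behavior the loss-minimization literature cited below analyzes in detail.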

Downloads

Download data is not yet available.

References

1. Bishop, C. M. (2006). Pattern recognition and machine learning. Springer.

2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.

3. Nocedal, J., & Wright, S. J. (2006). Numerical optimization (2nd ed.). Springer.

4. Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR).

5. Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. Proceedings of COMPSTAT.

6. Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2121–2159.

7. Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.

8. Strang, G. (2016). Introduction to linear algebra (5th ed.). Wellesley-Cambridge Press.

9. Apostol, T. M. (1967). Calculus, Vol. 1: One-variable calculus with an introduction to linear algebra. Wiley.

10. Abdulkadirov, A. M., & Haji, S. H. (2021). Comparison of optimization techniques based on gradient descent algorithm: A review. PalArch’s Journal of Archaeology of Egypt/Egyptology, 18(4), 2715–2743.

11. Richtárik, P., & collaborators (2021–2024). Stochastic and randomized optimization methods for large-scale machine learning. (Various Scopus-indexed journals)

12. Vu, T., & Raich, R. (2021). On asymptotic linear convergence of projected gradient descent for constrained least squares. arXiv / IEEE-accessible preprint.

13. Xie, Z., Yuan, L., Zhu, Z., & Sugiyama, M. (2021). Positive-negative momentum: Manipulating stochastic gradient noise to improve generalization. Proceedings of ICML.

14. Abdulkadirov, A., et al. (2023). Survey of optimization algorithms in modern neural networks. Mathematics (MDPI).

15. Kumar, R., et al. (2023). Sample gradient descent: A PCA-based optimization approach for machine learning. Journal of Big Data / Springer.

16. Kumar, S., & Singh, P. (2023). Analysis of gradient descent optimization on logistic regression models. International Journal of Intelligent Systems and Applications in Engineering.

17. Abdel Aal, O. F., Özbek, N. S., Cao, S., & Chen, Y. Q. (2025). A comprehensive survey of fractional gradient descent methods and convergence analysis. Information Sciences / Elsevier.

18. Abdel Aal, O. F., et al. (2025). 15 ways to apply fractional calculus in gradient descent optimization methods. IFAC-PapersOnLine.


Published

2025-10-01

How to Cite

[1]
“Machine Learning Optimization Using Calculus”, JASRAE, vol. 22, no. 5, pp. 628–637, Oct. 2025, doi: 10.29070/ppmaqh42.