A Survey of the Models for the Inventory Problem

An Overview of Inventory Models and Management Practices

by Kawle Nabha Prakashrao*, Dr. Rahul Dwivedi

- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540

Volume 19, Issue No. 4, Jul 2022, Pages 564 - 568 (5)

Published by: Ignited Minds Journals


ABSTRACT

The inventory problem is a critical challenge faced by businesses across various industries. Effective management of inventory can significantly impact a company's profitability and customer satisfaction. Over the years, researchers and practitioners have developed numerous models to address this problem and improve inventory management practices. This survey aims to provide an overview of the existing models for the inventory problem and highlight their strengths, limitations, and potential applications. The survey begins by presenting the different types of inventory models, including deterministic models, stochastic models, continuous review models, periodic review models, single-item models, and multi-item models. Each model type is discussed in detail, highlighting its assumptions, mathematical formulations, and optimization techniques. The survey then examines various factors that influence inventory decisions, such as demand forecasting, lead time, stockout costs, holding costs, and ordering costs.

KEYWORDS

inventory problem, models, inventory management practices, survey, types of inventory models, deterministic models, stochastic models, continuous review models, periodic review models, single-item models, multi-item models, assumptions, mathematical formulations, optimization techniques, demand forecasting, lead time, stockout costs, holding costs, ordering costs

INTRODUCTION

Operations Research methodologies are typically presented as separate, self-contained models. Although it can be challenging, establishing connections between these models can reveal how they are interrelated and make them easier for the user to understand. The present investigation employs three distinct models, namely the Markov Chain, Dynamic Programming, and Markov Sequential Decision Processes, to address an inventory problem based on a periodic review system. The three models are shown to converge to a common (s, S) policy, and a numerical example is presented to illustrate this convergence.[1]

Operations Research is commonly regarded as a collection of models, each tailored to a distinct problem type. Textbooks frequently treat these models as disparate subjects without establishing connections between them, yet such links are important for the coherence of Operations Research. The objective of the present work is to connect three models, namely the Markov Chain, Dynamic Programming, and Markov Sequential Decision Processes, by applying each of them to a common inventory control problem. It shows how each of the three models yields a solution and demonstrates that the three solutions, despite their apparent dissimilarities, are equivalent.[2]

THE PROBLEM AND THE (s, S) POLICY SOLUTION

Consider a hypothetical company that has estimated the distribution of demand, denoted D, for one of the products it manufactures: a demand of j units occurs with probability pj. Demand during a period n can be met from the quantity xn produced during that period and/or from the inventory available at the beginning of period n. A holding cost ch is charged for each unit carried in inventory from one period to the next, and a stockout cost cu is incurred for each unit demanded but not available, the corresponding sale being lost. The production cost cg(xn), a function of the quantity produced xn, is assumed to be zero when xn = 0 and concave for xn > 0.[3]

No particular inventory policy has been implemented so far, and management wishes to set up a process control system that automatically generates reorder decisions from a production policy δn prescribing the production quantities {xn}. Scarf demonstrated that an optimal production policy δn* exists for every period n: whenever the initial inventory position is at or below a reorder point sn*, the policy raises the inventory level to a target level Sn*; otherwise nothing is produced. A salient characteristic of the problem at hand is that the cost functions, the demand distribution, and the possible levels of initial inventory are the same in every period. This corresponds to a steady-state situation in which an optimal policy δ*(i) can be determined for any initial inventory i, independently of the period n. Our objective is therefore to determine the stationary policy δ* that assigns to each inventory position i a production quantity δ*(i) minimizing the total production, holding, and stockout costs over an infinite horizon. The policy is defined by the optimal values s* and S* of two variables s and S:[4]

δ*(i) = S* - i   if i ≤ s*;        δ*(i) = 0   if i > s*        (1.1)

Moreover, the initial inventory i can never exceed the difference between S (the highest attainable level) and m (the minimum possible demand):

i ≤ S - m        (1.2)

The policy (1.1) and constraint (1.2) implicitly require that:

(1.3)

Moreover, we assume an inventory capacity restriction of K units:

S ≤ K        (1.4)
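To make the setup concrete, the following is a minimal Python sketch of the (s, S) decision rule (1.1) and of the expected one-period production, holding, and stockout cost. The demand distribution, the cost parameters ch, cu, and cg, and all numerical values below are illustrative assumptions made for the example, not values taken from the survey.

```python
# Minimal sketch of the (s, S) rule and the expected one-period cost.
# All numbers are assumed for illustration only.

demand_pmf = {1: 0.3, 2: 0.5, 3: 0.2}            # assumed demand pmf: m = 1, M = 3
c_h, c_u = 1.0, 4.0                               # holding and stockout cost per unit

def c_g(x):
    """Assumed production cost: zero for x == 0, setup plus linear (concave) for x > 0."""
    return 0.0 if x == 0 else 5.0 + 2.0 * x

def order_quantity(i, s, S):
    """(s, S) policy (1.1): if inventory i is at or below s, produce up to S."""
    return S - i if i <= s else 0

def one_period_cost(i, x):
    """Expected production + holding + stockout cost with initial inventory i and production x."""
    level = i + x
    cost = c_g(x)
    for j, p in demand_pmf.items():
        if j <= level:
            cost += p * c_h * (level - j)         # units carried to the next period
        else:
            cost += p * c_u * (j - level)         # unmet demand is a lost sale
    return cost

print(order_quantity(1, s=2, S=6))                # -> 5
print(round(one_period_cost(1, 5), 2))            # expected cost of ordering up to 6
```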

THE MARKOV CHAIN MODEL

By the policy (1.1) and constraint (1.2), the inventory level In at the beginning of each period n is a discrete-time stochastic process whose feasible values are {0, 1, …, S - m}. The probability distribution of In depends only on the inventory level In-1 and not on the states visited by the process on its way to In-1, since In is always equal to In-1 plus production minus sales.[5] For any given pair of states i and k, the transition probability depends on the prescribed policy (s, S), so the transition probabilities can be expressed in terms of the demand probabilities pj.

Given that there are no transient or periodic states within the chain and that all states communicate, the chain is ergodic. Hence, a stationary probability distribution π exists; it can be determined by solving the system of equations:

πk = Σi πi pik(s, S)   for k = 0, 1, …, S - m,        Σk πk = 1        (1.5)

The functions g(s, S), h(s, S), and u(s, S) denote the expected production, inventory holding, and stockout costs per period, respectively, as functions of the reorder point s and the target level S:

The expected total cost per period, denoted w(s, S), is the sum of these three functions. The optimal values s* and S* are obtained by minimizing w(s, S) = g(s, S) + h(s, S) + u(s, S) subject to constraint (1.4):

minimize  w(s, S) = g(s, S) + h(s, S) + u(s, S)   subject to   S ≤ K        (1.9)
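A minimal sketch of the Markov chain computation, reusing the illustrative demand distribution and cost functions from the sketch above: for a given (s, S) it builds the transition matrix over the states {0, 1, …, S - m}, solves the stationary system (1.5), evaluates the expected cost per period w(s, S), and searches for (s*, S*) subject to S ≤ K. The values of m, M, and K are again assumptions made for the example.

```python
import numpy as np

def transition_matrix(s, S, demand_pmf, m):
    n_states = S - m + 1                          # states 0, 1, ..., S - m
    P = np.zeros((n_states, n_states))
    for i in range(n_states):
        level = i + order_quantity(i, s, S)       # inventory after production
        for j, p in demand_pmf.items():
            nxt = max(level - j, 0)               # lost sales keep inventory at zero
            P[i, nxt] += p
    return P

def expected_cost_per_period(s, S, demand_pmf, m):
    P = transition_matrix(s, S, demand_pmf, m)
    n = P.shape[0]
    # stationary distribution (1.5): pi P = pi and sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sum(pi[i] * one_period_cost(i, order_quantity(i, s, S)) for i in range(n))

# grid search for (s*, S*) subject to the capacity constraint S <= K, as in (1.9)
m, M, K = 1, 3, 10                                # assumed values for the illustration
best = min(((s, S) for S in range(M, K + 1) for s in range(0, S - m + 1)),
           key=lambda c: expected_cost_per_period(*c, demand_pmf, m))
print("(s*, S*) =", best)
```

Because s and S are bounded by the capacity K, the search space is small and complete enumeration of the candidate pairs is usually feasible.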

THE DYNAMIC PROGRAMMING MODEL

Consider period n as the stage and the inventory level in at the beginning of period n as the state. The process undergoes a transition from state in to state in+1:

in+1 = max(in + δn(in) - j, 0)   with probability pj        (1.10)

where j belongs to the set {m, …, M} and δn(in) is the production quantity for period n prescribed by policy δn, which depends on the initial inventory level.[6] Let v(in, δn(in)) denote the expected total cost of production, inventory holding, and stockout for period n, given an initial inventory of in units and a production of δn(in) units:

v(in, δn(in)) = cg(δn(in)) + ch Σ(j ≤ y) (y - j) pj + cu Σ(j > y) (j - y) pj,   where y = in + δn(in)        (1.11)

The aim is to minimize the expected total cost over periods 1, 2, …, given the initial inventory level i1. Let fn(in) denote the minimum expected cost incurred from period n onward (periods n, n + 1, …), given that in units are in inventory at the beginning of period n. The recurrence relation between fn(in) and fn+1(in+1) can then be expressed as:

fn(in) = min over xn { v(in, xn) + Σj pj fn+1(max(in + xn - j, 0)) }        (1.12)

Dynamic programming models require a finite horizon, because the computation of fn(in) in equation (1.12) depends on the computation of fn+1(in+1). The recurrence is therefore started from a final period N. If N is chosen sufficiently large, the process reaches a steady state: for the early periods there is an optimal policy δ*(i) that applies to every initial inventory level i, regardless of the period n.[7] The final periods may differ, however, because they still reflect the effect of introducing the "dummy" last period N. The solution obtained from the dynamic programming approach is:

(1.13)

The first part of equation (1.16) can be demonstrated by the same reasoning as before, while the second part follows directly from constraint (1.4).
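The backward recursion (1.12) can be sketched as follows, again reusing the illustrative data defined earlier. The terminal condition fN+1(i) = 0 and the horizon length N = 30 are assumptions made for the example; when N is large enough, the period-1 decisions exhibit the stationary (s*, S*) structure described above.

```python
import numpy as np

def finite_horizon_dp(N, demand_pmf, m, K):
    """Backward recursion (1.12): f_n(i) = min_x [ v(i, x) + sum_j p_j f_{n+1}(max(i+x-j, 0)) ].

    States are i = 0, ..., K - m; the dummy terminal cost f_{N+1} is taken to be zero.
    """
    n_states = K - m + 1
    f_next = np.zeros(n_states)                   # f_{N+1}(i) = 0 for all i
    policy = np.zeros(n_states, dtype=int)
    for n in range(N, 0, -1):                     # periods N, N-1, ..., 1
        f_now = np.zeros(n_states)
        for i in range(n_states):
            best_cost, best_x = float("inf"), 0
            for x in range(0, K - i + 1):         # capacity restricts i + x <= K
                cost = one_period_cost(i, x)
                for j, p in demand_pmf.items():
                    cost += p * f_next[max(i + x - j, 0)]
                if cost < best_cost:
                    best_cost, best_x = cost, x
            f_now[i], policy[i] = best_cost, best_x
        f_next = f_now
    return f_next, policy                         # expected costs and decisions for period 1

f1, delta1 = finite_horizon_dp(N=30, demand_pmf=demand_pmf, m=1, K=10)
print(delta1)                                     # production decision for each initial inventory
```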

THE MARKOV SEQUENTIAL DECISION PROCESSES MODEL

A Markov sequential decision process can be viewed as a probabilistic dynamic program with an infinite horizon. It can also be described as a Markovian process with a finite set of states and an economic value structure attached to the transitions between those states. In this instance, the state variable remains the initial inventory of the period. Let fδ(i) denote the expected cost incurred over an infinite number of periods, given that the initial state is i and a stationary policy δ is followed from the beginning of period 1:[8]

fδ(i) = v(i, δ(i)) + Σj pj fδ(max(i + δ(i) - j, 0))        (1.17)

where v(i, δ(i)) is the expected cost incurred during the current period, as defined in (1.11). Since the horizon is infinite, fδ(i) will also be infinite. To cope with this, we use the expected discounted total cost: we assume that $1 paid in the next period has the same value as a cost of α dollars (0 < α < 1) paid during the current period. Let Vδ(i) be the expected discounted cost incurred over an infinite number of periods, given that the state at the beginning of period 1 is i and stationary policy δ is followed:

Vδ(i) = v(i, δ(i)) + α Σj pj Vδ(max(i + δ(i) - j, 0))        (1.18)

The costs in the second term are discounted to the start of period 2 and accrue from that point onward. The minimum value of Vδ(i), which we denote V*(i), is the expected discounted cost incurred over an infinite number of periods when the state at the beginning of period 1 is i and the optimal stationary policy δ* is followed.[9-10]
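A minimal sketch of value iteration for the discounted Markov sequential decision process, reusing the same illustrative data; the discount factor α = 0.9 and the convergence tolerance are assumptions made for the example. The resulting stationary decisions again display the (s*, S*) structure.

```python
import numpy as np

def discounted_value_iteration(alpha, demand_pmf, m, K, tol=1e-8):
    """Value iteration for V*(i) = min_x [ v(i, x) + alpha * sum_j p_j V*(max(i+x-j, 0)) ]."""
    n_states = K - m + 1
    V = np.zeros(n_states)
    while True:
        V_new = np.zeros(n_states)
        policy = np.zeros(n_states, dtype=int)
        for i in range(n_states):
            best_cost, best_x = float("inf"), 0
            for x in range(0, K - i + 1):         # capacity restricts i + x <= K
                cost = one_period_cost(i, x)
                for j, p in demand_pmf.items():
                    cost += alpha * p * V[max(i + x - j, 0)]
                if cost < best_cost:
                    best_cost, best_x = cost, x
            V_new[i], policy[i] = best_cost, best_x
        if np.max(np.abs(V_new - V)) < tol:       # stop when successive value functions agree
            return V_new, policy
        V = V_new

V_star, delta_star = discounted_value_iteration(alpha=0.9, demand_pmf=demand_pmf, m=1, K=10)
print(delta_star)                                 # optimal stationary decisions for each state
```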

CONCLUSION

This survey highlights the extensive research and development carried out in this field. The inventory models discussed here provide a range of approaches to address various inventory management challenges. Deterministic models offer simple and straightforward solutions for managing inventory, while stochastic models account for uncertainties and variability in demand and lead time. Continuous review models provide real-time inventory control, ensuring timely replenishment, while periodic review models offer batch ordering strategies to optimize costs. Single-item models focus on managing inventory for individual products, while multi-item models consider the interdependencies and trade-offs between different items. Each model type has its advantages and limitations, and businesses need to carefully select the appropriate model based on their specific requirements and constraints.

Recent advancements in inventory modeling, such as incorporating machine learning techniques, have the potential to enhance demand forecasting accuracy and optimize inventory decisions. Considering perishable inventory and addressing the challenges of online retailing are critical for industries with time-sensitive products and omnichannel distribution. Furthermore, integrating inventory models with other supply chain components can lead to more holistic and efficient decision-making.

REFERENCES

1. Saksena, J.P., "Dynamic Programming Treatment of the Job Assignment Problem," Opsearch, Vol. 6, pp. 129-136, (1969).
2. Sarma, G.V., "The Reduced Matrix Method to Solve the Transportation Problem," Opsearch, Vol. 31, No. 1, pp. 48-59, (1994).
3. Seguin, R., Potvin, J.Y., Gendreau, M., Crainic, T.G. and Marcotte, P., "Real-Time Decision Problems: An Operational Research Perspective," J. Operat. Res. Soc., Vol. 48, pp. 162-174, (1997).
4. Shah, A.R., "An Optimal Assignment of Medical Graduates to Pre-registration Jobs," Operational Research, Vol. 6, pp. 393-398, (1981).
5. Sharma, J.K., "Operations Research: Theory and Applications," Macmillan India Limited, Delhi, (1997).
6. Sharma, J.K. and Swarup, Kanti, "Time-Cost Trade-off in a Multi-Dimensional Transportation Problem-I," Opsearch, Vol. 16, No. 1, pp. 23-33, (1997).
7. Sharma, S.K., "Make Best Decisions Through Operations Research," Kedar Nath Ram Nath and Co., Meerut, India, (1988-89).
9. Shenoy, G.V., Srivastava, U.K. and Sharma, S.C., "Operations Research for Management," Wiley Eastern Limited, New Delhi, (1986).
10. Song, Hai Zhou, "A Fast and Approximate Algorithm for Solving Traveling Salesman Problems and Its Application," J. Huaqiao Univ. Nat. Sci. Ed., Vol. 26, No. 3, pp. 231-234, (2005).
11. Sonia and Malhotra, Rita, "A Polynomial Algorithm for a Two-Stage Time Minimizing Transportation Problem," Opsearch, Vol. 39, No. 5&6, pp. 251-266, (2003).
12. Wild, Bill Sr., Karwan, Kirk R. and Karwan, Mark H., "The Multiple Bottleneck Transportation Problem," Computers Ops. Res., Vol. 20, No. 3, pp. 261-274, (1993).
13. Srinivasan, V. and Thompson, G.L., "An Algorithm for Assigning Uses to Sources in a Special Class of Transportation Problems," Operations Research, Vol. 21, Nos. 1-6, pp. 284-295, (1973).
14. Srinivasan, V. and Thompson, G.L., "Alternate Formulations for Static Multi-Attribute Assignment Models," Management Science, Vol. 20, No. 2, pp. 154-158, (October 1973).
15. Subramanyam, Y.V., "Some Special Cases of Assignment Problems," Opsearch, Vol. 16, No. 1, pp. 45-47, (1979).
16. Sultan, Alan, "Heuristic for Finding an Initial B.F.S. in a Transportation Problem," Opsearch, Vol. 25, No. 3, pp. 197-199, (1988).
17. Sun, Minghe, "The Transportation Problem with Exclusionary Side Constraints and Two Branch-and-Bound Algorithms," European J. Oper. Res., Vol. 140, No. 3, pp. 629-647, (2002).
18. Sung, Ye Xin, "A Contribution to Solving the Multi-Objective Assignment Problems," Mobu Xiton yu Shuxure, Vol. 15, No. 3, pp. 86-89, (2001).
19. Swain, A.D. and Guttmann, H.E., "Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications," Technical Report NUREG/CR-1278, Nuclear Regulatory Commission, Washington, DC, (1983).
20. Swarup, Kanti, Gupta, P.K. and Man Mohan, "Introduction to Operations Research," Sultan Chand and Sons, New Delhi, (1988).
21. Taha, Hamdy A., "Operations Research: An Introduction," Prentice-Hall of India Private Limited, New Delhi, (1997).

Corresponding Author: Kawle Nabha Prakashrao*

PhD Student, Kalinga University, Raipur (CG).