Reza Rasinojehdehi; Soheil Azizi
Abstract
The escalating annual insurance costs nationwide have sparked a growing interest among insurance industry managers and policymakers in analyzing insurance data to forecast future costs. Accurately predicting the number of claims and implementing appropriate policies can help mitigate potential losses for insurance companies and customers. This study focuses on predicting the amount of customer claims and utilizes data from 128 individuals insured by Iran Insurance Company. The dataset includes various attributes such as the age of the vehicle owner, type of car, age of the car itself, number of claims, and the corresponding claim amounts (measured in 10,000 Tomans) recorded in the year 1400 (Iranian calendar). All features except the claim amount (the target variable) were discretized into ordinal variables to ensure accurate analysis and address outliers and data inconsistencies. Multiple linear regression was employed to predict the target variable, enabling an investigation into the influence of each feature on the estimated claim amount. The data analysis was conducted using IBM SPSS Modeler software, allowing for a comprehensive examination of the assumptions associated with the regression model. By leveraging this approach, insurance industry stakeholders can gain valuable insights into predicting claim amounts and make informed decisions to optimize their operations and minimize potential financial risks.
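The regression setup described above — ordinal predictors and a continuous claim amount fitted by ordinary least squares — can be sketched as follows. All feature names, codings, and values here are hypothetical placeholders, not the study's actual dataset (which is analyzed in SPSS Modeler, not Python):

```python
import numpy as np

# Synthetic stand-in for the 128 insured individuals; the ordinal codings
# (age buckets, car-type codes) are assumptions for illustration only.
rng = np.random.default_rng(0)
n = 128
X = np.column_stack([
    rng.integers(1, 5, n),   # owner age bucket (ordinal)
    rng.integers(0, 3, n),   # car type code (ordinal)
    rng.integers(1, 4, n),   # car age bucket (ordinal)
    rng.integers(0, 4, n),   # number of claims
])

# Synthetic target (claim amount in 10,000 Tomans) generated from an
# assumed linear rule so that the fit can be checked exactly.
true_coef = np.array([-5.0, 8.0, 10.0, 30.0])
y = 50.0 + X @ true_coef

# Ordinary least squares with an intercept column: beta[0] is the
# intercept, beta[1:] are the per-feature coefficients.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)
```

Because the synthetic target is exactly linear in the features, the recovered coefficients match the generating rule; with real, noisy claim data the coefficients would instead be interpreted alongside the regression diagnostics the abstract mentions.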
Soheil Fakheri
Abstract
Analysis of big data has been presented as an advanced analytical technology involving large-scale and complex applications. In this paper, we review the general background of big data and focus on data generation and data analysis. We then examine several representative applications of big data, including enterprise management, the Internet of Things, and online social networks. These discussions aim to provide readers with a comprehensive overview of this exciting area.
Amir Hossein Hariri; Esmaeil Bagheri; Sayyed Mohammad Reza Davoodi
Abstract
Coronary artery heart failure is the leading cause of mortality among cardiac diseases. In most cases, angiography is a reliable method for the diagnosis and treatment of cardiovascular diseases; however, it is a costly approach associated with various complications. The significant increase in the prevalence of cardiovascular diseases and the resulting complications and treatment costs have urged researchers to plan for better examination, prevention, early detection, and effective treatment of these conditions. The present study aimed to determine the patterns of cardiovascular diseases using integrated classification techniques, analyzing data from internal medicine patients at risk of heart failure comprising 451 samples and 13 characteristics. Selecting characteristics and evaluating the influential factors are essential to developing classifiers and increasing their accuracy; therefore, we identified the influential factors using the Gini index. In the classification phase, basic techniques were used, including a decision tree and a neural network, along with ensemble techniques such as gradient boosting, random forest, and a deep learning method. A comparison revealed that deep learning, with an overall accuracy of 95.33%, disease class accuracy of 95.77%, and health class accuracy of 94.74%, improved on the results of the basic neural network. Our findings confirmed that ensemble methods and the selection of influential factors are essential to increasing the accuracy of diagnostic systems for heart failure. Furthermore, the reported practical tree rules emphasized the use of analytical methods to extract knowledge.
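The pipeline the abstract describes — ranking features by Gini importance and then fitting ensemble classifiers such as random forest and gradient boosting — can be illustrated as below. The data is simulated with the same shape as the study's cohort (451 samples, 13 features, binary disease/healthy label); it is not the actual patient data, and the specific model settings are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Simulated cohort matching the study's dimensions (not the real data).
X, y = make_classification(n_samples=451, n_features=13, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

# A random forest's impurity-based feature_importances_ are Gini
# importances; sorting them ranks the influential factors.
forest = RandomForestClassifier(n_estimators=200, random_state=42)
forest.fit(X_tr, y_tr)
ranking = np.argsort(forest.feature_importances_)[::-1]
print("features ranked by Gini importance:", ranking[:5])

# Gradient boosting as a second ensemble classifier, evaluated on the
# held-out split.
boost = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
print("gradient boosting test accuracy: %.3f"
      % accuracy_score(y_te, boost.predict(X_te)))
```

In practice one would retrain the classifiers on only the top-ranked features and compare accuracies, which is the effect the abstract reports for its feature-selection step.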