Why do the values change when I run this basic C program?


I've started learning C, and am referencing a book and the freeCodeCamp YouTube channel. I typed out a basic C program to calculate gross salary. Usually, the program runs without a problem, but sometimes the one value I have to enter is altered in the command prompt. The code and an example of the issue are given below.

//calculate gross salary of a nondescript guy who is totally not Clark Kent
#include <stdio.h>
#include <stdlib.h>

int main()
{
    float bs, da, hra, gs;
    printf("\nEnter basic salary of guy: ");
    scanf("%f", &bs);
    da = 0.4 * bs;
    hra = 0.2 * bs;
    gs = da + hra + bs;
    printf("Basic salary of Guy = %f\n", bs);
    printf("Dearness allowance = %f\n", da);
    printf("House Rent Allowance = %f\n", hra);
    printf("Gross Salary = %f\n", gs);
    return 0;
}

[Screenshot: Command Prompt output]

I have no idea why it happens. I've tried other values, and like I said, the program runs totally fine then. I'm new, so it might be some theoretical limitation on values that I'm yet to learn about, but any help would be appreciated. P.S. I'm using Code::Blocks as my IDE, if it matters.

CodePudding user response:

You can try using the data type double instead of float, as it has higher accuracy (double the precision of float, to be more specific).

You have to know how float and double are represented. Under the IEEE 754 standard, a float cannot store most decimal fractions exactly: the value 0.59392 is actually stored in memory as a slightly different value (approximately 0.59391999), and the integer 123456789 is stored as 123456792. There are other kinds of decimal types that solve this problem, and the differences between them are as follows:

  • Decimal representations store base-10 digits directly, so decimal fractions like 0.1 round-trip exactly, at the cost of some precision for a given width; the two most-used standards are binary integer decimal (BID) and densely packed decimal (DPD)

  • float and double give a wider range than decimal types of the same width, but cannot represent most decimal fractions exactly; they follow the IEEE 754 standard

  • Fixed-point types have the lowest range, but within that range they are exact, and they are the fastest

But remember that only double and float are part of the C standard, while _Decimal64, _Fract, and other such data types are compiler-specific extensions.

You can refer to any online tutorial to learn how float and decimal types are represented in memory, like this small tutorial.
