#include <stdio.h>
#include <string.h>

void printlen(char *s, char *t) {
    int c = 0;
    int len = ((strlen(s) - strlen(t) > c) ? strlen(s) : strlen(t));
    printf("%d", len);
}

void main() {
    char *x = "abc";
    char *y = "defgh";
    printlen(x, y);
}
strlen(s) is 3 and strlen(t) is 5, so why does strlen(s) - strlen(t) > c evaluate to true?
CodePudding user response:
The strlen function returns a value of type size_t, which is unsigned. So subtracting unsigned 5 from unsigned 3 wraps around to a very large unsigned number. That number is greater than 0, so the condition is true, causing strlen(s) to be evaluated and assigned to len.

The result of the subtraction should be cast to int so the comparison is done on a signed value (on typical platforms the wrapped result converts back to -2, making the condition false):
int len = (((int)(strlen(s) - strlen(t)) > c) ? strlen(s) : strlen(t));
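To see the wraparound directly, here is a minimal standalone sketch; the exact value printed assumes a 64-bit size_t, and the conversion back to int relies on typical (implementation-defined) behavior:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* The subtraction is performed in size_t, so 3 - 5 wraps around
       instead of producing -2. */
    size_t diff = strlen("abc") - strlen("defgh");
    printf("%zu\n", diff);      /* 18446744073709551614 with a 64-bit size_t */
    printf("%d\n", diff > 0);   /* 1: the comparison is true */
    /* Casting the wrapped result to int recovers -2 on typical platforms: */
    printf("%d\n", (int)diff);  /* -2 */
    return 0;
}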
CodePudding user response:
int len = ((strlen(s) - strlen(t) > c) ? strlen(s) : strlen(t));
printf("%d", len);

is equivalent to:

int len;
if (strlen(s) - strlen(t) > c)
    len = strlen(s);
else
    len = strlen(t);
printf("%d", len);
Firstly, printf("%d", len); is unconditional. Whether strlen(s) - strlen(t) > c is true or false, printf("%d", len); gets executed.
Second, strlen returns size_t, which is an unsigned integer type (not necessarily unsigned int), so (strlen(s) - strlen(t) > c) is true whenever the two lengths differ: if strlen(s) is smaller, the difference wraps around to a huge unsigned value instead of going negative.
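A small sketch like the following can be used to inspect this on a given platform; the printed sizes are implementation-dependent:

#include <stdio.h>

int main(void) {
    /* size_t is unsigned; its width is implementation-defined and is often
       wider than unsigned int on 64-bit platforms. */
    printf("sizeof(unsigned int) = %zu\n", sizeof(unsigned int));
    printf("sizeof(size_t)       = %zu\n", sizeof(size_t));
    printf("(size_t)0 - 1 = %zu\n", (size_t)0 - 1);  /* wraps to SIZE_MAX */
    return 0;
}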
You should cast both lengths to int so the subtraction uses signed arithmetic (casting only strlen(s) is not enough, because the int would be converted back to size_t before the subtraction):

int len = (((int)strlen(s) - (int)strlen(t) > c) ? strlen(s) : strlen(t));
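Putting it together, here is a corrected sketch of the original program (int main and a trailing newline in the printf are added here for tidiness; they are not part of the fix):

#include <stdio.h>
#include <string.h>

/* Prints the length of the longer of the two strings. */
void printlen(char *s, char *t) {
    int c = 0;
    /* Casting both lengths to int makes the subtraction signed, so
       3 - 5 is really -2 and the comparison behaves as expected. */
    int len = (((int)strlen(s) - (int)strlen(t) > c) ? strlen(s) : strlen(t));
    printf("%d\n", len);
}

int main(void) {
    char *x = "abc";
    char *y = "defgh";
    printlen(x, y);  /* prints 5 */
    return 0;
}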