printf works differently for unicode and multibyte


I have a service that writes a DWORD to a named pipe. The value is then read by another process (both are built as Unicode).

When I receive the DWORD, convert it to a displayable string (TCHAR, char, wchar_t, etc.), and print it with printf at the command prompt, I get uneven results with newlines.

HANDLE hOutput = GetStdHandle(STD_OUTPUT_HANDLE);
TCHAR szBuffer[SIZEOF_BUFFER];
DWORD dwRead;

for (;;)
{
    if (!ReadFile(hRemoteOutPipe, szBuffer, SIZEOF_BUFFER, &dwRead, NULL) ||
        dwRead == 0)
    {
        DWORD dwErr = GetLastError();
        if (dwErr == ERROR_NO_DATA)
            break;
    }

    
    // Null-terminate the bytes just read before printing
    szBuffer[dwRead / sizeof(TCHAR)] = _T('\0');

    // Send it to our stdout
    printf("%s",szBuffer);
    fflush(stdout);
}

CloseHandle(hRemoteOutPipe);

hRemoteOutPipe = INVALID_HANDLE_VALUE;


::ExitThread(0);

printf works fine for the multibyte build, but not for the Unicode build. Kindly help me out.

CodePudding user response:

First off: TCHAR was an idea that made sense in 1995, not so much in 2022; it leads to exactly these weird errors. Secondly, you tagged the question C++, but you're not using std::cout. The root cause of the problem is that printf doesn't understand TCHAR; you'd need to use _tprintf instead.
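For illustration, a minimal sketch of what the print call could look like with the TCHAR-aware CRT function (this is a standalone toy program, not the asker's pipe loop; the buffer contents are made up):

```cpp
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

// Minimal sketch: print a TCHAR string with _tprintf, which expands to
// wprintf in a Unicode build (_UNICODE defined) and to printf in a
// multibyte build, so the format string and argument types always match.
int _tmain(void)
{
    TCHAR szBuffer[] = _T("value read from the pipe");  // hypothetical sample data

    _tprintf(_T("%s\n"), szBuffer);  // %s matches TCHAR in both build flavours
    fflush(stdout);
    return 0;
}
```

Alternatively, as noted above, std::cout / std::wcout sidestep format strings entirely.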

CodePudding user response:

TCHAR is a preprocessor macro that maps to either wchar_t or char, depending on whether UNICODE is defined.

The %s placeholder expects a char* string in printf(), whereas it expects a wchar_t* string in wprintf() (that is Microsoft's CRT convention; standard C would use %ls for a wide string).

Since you are using TCHAR strings, the output will not be what you expect when TCHAR is wchar_t, because that is a type mismatch in printf(). You would need to use _tprintf() from <tchar.h> instead, which is a preprocessor macro that maps to either wprintf() or printf() depending on whether _UNICODE is defined (the CRT's counterpart to UNICODE).
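Conceptually, the mapping described above looks roughly like this (a simplified sketch of what <tchar.h> does, not the actual header contents):

```cpp
// Simplified sketch of the <tchar.h> mapping -- illustration only.
#ifdef _UNICODE
    typedef wchar_t _TCHAR;
    #define _T(x)    L##x       // _T("%s") becomes L"%s"
    #define _tprintf wprintf    // %s then expects a wchar_t* (MSVC convention)
#else
    typedef char _TCHAR;
    #define _T(x)    x
    #define _tprintf printf     // %s expects a char*
#endif
```

So in the question's loop, replacing printf("%s", szBuffer) with _tprintf(_T("%s"), szBuffer) keeps the format string and the buffer in the same character type regardless of which build configuration is used.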
