Why do I get "argument of type "LPCSTR" is incompatible with parameter of type "LPCWSTR""?


I am working on the following function:

int initSerialPort(HANDLE* hSerialPort, LPCSTR portName){
    *hSerialPort = CreateFile(
        portName,
        GENERIC_READ | GENERIC_WRITE,
        0,
        0,
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL,
        0
    );
    // ...
}

However, I get a red error mark under the "portName" variable with the error message

argument of type "LPCSTR" is incompatible with parameter of type "LPCWSTR"

However, despite this error, the code compiles and runs as expected. I am currently passing in an argument as follows:

LPCSTR portName = "COM1";
initSerialPort(&hSerialPort, portName);

Furthermore, when I try to use LPCWSTR instead, the code does not compile. That is, when I change the parameter type to LPCWSTR and initialize the argument like this:

LPCWSTR portName = L"COM5";
initSerialPort(&hSerialPort, portName);

I no longer see the red error squiggle; however, when I try to compile, I get the following error:

.\test.cpp:28:17: error: cannot convert 'LPCWSTR' {aka 'const wchar_t*'} to 'LPCSTR' {aka 'const char*'}
   28 |                 portName,
      |                 ^~~~~~~~
      |                 |
      |                 LPCWSTR {aka const wchar_t*}

What the heck is going on?

CodePudding user response:

Basically, CreateFile() isn't a function; it's a macro that selects either the function CreateFileA() or CreateFileW() based on the character set your compiler is configured for. Most Windows API functions that take string parameters are made that way.

The error occurs because, although you're explicitly using simple 8-bit char strings, your tooling is set to assume 16-bit char strings by default, so the macro picks CreateFileW(), which takes LPCWSTR instead of LPCSTR.
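To see which branch your own build takes, a minimal check like the following (just a diagnostic sketch, assuming a standard Windows toolchain) prints which function the macro resolves to:

#include <windows.h>
#include <cstdio>

int main()
{
#ifdef UNICODE
    // The build defines UNICODE: CreateFile expands to CreateFileW (LPCWSTR).
    printf("UNICODE is defined: CreateFile -> CreateFileW\n");
#else
    // No UNICODE define: CreateFile expands to CreateFileA (LPCSTR).
    printf("UNICODE is not defined: CreateFile -> CreateFileA\n");
#endif
    return 0;
}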

To fix this, you can pick one of these solutions:

  1. Explicitly call CreateFileA() instead of CreateFile() (see the sketch after this list)
  2. Change the default char width in your project settings
  3. Use 16-bit char strings throughout
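Here is a minimal sketch of option 1, keeping your original signature (the return-value convention is just an assumption for illustration):

#include <windows.h>

// Calling the ANSI version explicitly makes the UNICODE setting irrelevant.
int initSerialPort(HANDLE* hSerialPort, LPCSTR portName)
{
    *hSerialPort = CreateFileA(
        portName,
        GENERIC_READ | GENERIC_WRITE,
        0,                      // no sharing
        0,                      // default security attributes
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL,
        0                       // no template file
    );
    return (*hSerialPort == INVALID_HANDLE_VALUE) ? -1 : 0;
}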

You could also use the TCHAR type and the _T() macro to handle your strings, making your project automatically use 8-bit or 16-bit chars depending on what option is selected on the compiler, though this could require substantial changes throughout the entire project. For example:
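A rough sketch of that approach (the return convention is again an assumption; LPCTSTR is const TCHAR*, which follows the UNICODE/_UNICODE settings automatically):

#include <windows.h>
#include <tchar.h>

// LPCTSTR resolves to const char* or const wchar_t* to match the CreateFile macro.
int initSerialPort(HANDLE* hSerialPort, LPCTSTR portName)
{
    *hSerialPort = CreateFile(  // macro now matches the string type either way
        portName,
        GENERIC_READ | GENERIC_WRITE,
        0, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0
    );
    return (*hSerialPort == INVALID_HANDLE_VALUE) ? -1 : 0;
}

// Caller side, using _T() so the literal is narrow or wide as needed:
//   LPCTSTR portName = _T("COM1");
//   initSerialPort(&hSerialPort, portName);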

CodePudding user response:

The Windows API has wide (UTF-16) and ANSI versions of most of its functions, with W and A suffixes respectively. Depending on whether the macro UNICODE is defined, the suffixless function names are defined to either the ANSI or the wide versions; e.g. CreateFile is actually defined as a macro to be either CreateFileA or CreateFileW.
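Conceptually, the Windows headers do something like this (a simplified sketch, not the literal header contents):

#ifdef UNICODE
#define CreateFile CreateFileW   // takes LPCWSTR (const wchar_t*)
#else
#define CreateFile CreateFileA   // takes LPCSTR (const char*)
#endif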

It seems your IDE defines the UNICODE macro when producing its in-editor underlining etc., while your compiler is not getting this define when you build your project.

All this can be avoided by just calling the suffixed versions directly and not bothering with the UNICODE define at all.

Windows 11 (and maybe even 10?) has a registry key to let the ANSI (A) variants handle UTF-8 properly, which of course isn't really ideal for a developer, as you don't control the registry on your users' systems. I'd always call the W versions directly and handle conversion to UTF-16 wide strings when calling the Windows API, or, if not writing cross-platform code, just use wchar_t/std::wstring directly everywhere.
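A minimal sketch of that conversion approach, using MultiByteToWideChar (error handling omitted; the helper name is made up for illustration):

#include <windows.h>
#include <string>

// Convert a UTF-8 narrow string to a UTF-16 wide string.
std::wstring toWide(const std::string& utf8)
{
    if (utf8.empty()) return std::wstring();
    // First call computes the required length in wchar_t units.
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                  (int)utf8.size(), nullptr, 0);
    std::wstring wide(len, L'\0');
    // Second call performs the actual conversion into the buffer.
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        (int)utf8.size(), &wide[0], len);
    return wide;
}

// Usage: always call the W version with the converted string.
//   HANDLE h = CreateFileW(toWide("COM5").c_str(),
//                          GENERIC_READ | GENERIC_WRITE, 0, nullptr,
//                          OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);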
