I have this problem because I am writing a library that I plan to use in my future software development.

The library provides separate code paths for Linux and Windows. In the file-operations part, for encoding reasons, I need quite a few functions to handle the encoding of file path strings.

In the "Open File" operation, you can use the function provided by the WINAPI called CreateFile or use the FILE pointer.

So I was wondering: since I am targeting Windows anyway, would it be better to use the functions provided by the WinAPI?
But that seems a bit troublesome, because it would mean rewriting all of my Windows-specific code to use WinAPI functions. Some of those functions do offer more functionality, but they are not as portable as the ordinary standard library functions.

For example, opening a file and reading from it. Here is some sample code:

WinAPI version:

#include <windows.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

void test(void)
{
    HANDLE handle;      /* CreateFileA returns a HANDLE, not a HANDLE * */
    DWORD bytes_read = 0;
    uint8_t buff[4096];

    memset(buff, 0x00, sizeof(buff));

    handle = CreateFileA("README.md", GENERIC_READ, FILE_SHARE_READ, NULL,
        OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (handle == INVALID_HANDLE_VALUE)
        return;

    /* The byte-count pointer must not be NULL when no OVERLAPPED is used;
       read one byte less than the buffer so it stays NUL-terminated. */
    ReadFile(handle, buff, sizeof(buff) - 1, &bytes_read, NULL);

    printf("%s\n", buff);

    CloseHandle(handle);
}

Standard C version:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

void test(void)
{
    FILE *fp = fopen("README.md", "rb");
    uint8_t buff[4096];

    if (fp == NULL)
        return;

    memset(buff, 0x00, sizeof(buff));

    /* Read one byte less than the buffer so it stays NUL-terminated. */
    fread(buff, 1, sizeof(buff) - 1, fp);

    printf("%s\n", buff);

    fclose(fp);
}

Since everything I need can be implemented either way, should I use the functions provided by the WinAPI?
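
One idea that would keep the encoding handling in one place is to store paths as UTF-8 inside the library and convert them to UTF-16 only at the point where the Win32 call is made. A rough sketch of that idea (open_utf8_path is just a placeholder name, the path is assumed to fit in MAX_PATH, and error handling is minimal):

#include <windows.h>

/* Placeholder helper: take a UTF-8 path, convert it to UTF-16, and open
   it with the wide-character API so Unicode file names work. */
HANDLE open_utf8_path(const char *utf8_path)
{
    wchar_t wpath[MAX_PATH];

    /* Convert the UTF-8 path to UTF-16; fails if it does not fit. */
    if (MultiByteToWideChar(CP_UTF8, 0, utf8_path, -1, wpath, MAX_PATH) == 0)
        return INVALID_HANDLE_VALUE;

    return CreateFileW(wpath, GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}

With something like this, the rest of the library can keep passing paths around as plain char *, and only this boundary function is Windows-specific.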

Comments:

  • In general you should only use WinAPI when you need to do Windows-specific things. If you can do it with portable C, do that instead. – Barmar Jun 18 '23 at 09:28
  • Add an `arch` folder and create separate files for different OSes, with the Win32 API for Windows. That way you will have the complete set of OS-specific features. This is in case you want to use all the features of Windows/NTFS/etc. If you only need basic file I/O then `fopen` will also work. But don't forget to test with Unicode file names. – i486 Jun 18 '23 at 09:35
  • Why are you using ANSI APIs on Windows? – David Heffernan Jun 18 '23 at 09:35
  • @i486 I am considering using the functions provided by WINAPI because I need to handle string encoding, as in many cases, the `fopen` function cannot be used to open paths containing Unicode characters in Windows systems. – S-N Jun 18 '23 at 10:13
  • When doing I/O, **any** I/O, prefer Win32 over any given "Standard" abstraction. The only two reasons why you'd want to go with POSIX are: `1` Writing malware, and `2` writing I/O-bound benchmarks that must have Linux come out as the winner. – IInspectable Jun 18 '23 at 11:05
  • You can certainly open files with Unicode file names without dropping to Win32. But it's highly ironic that you used CreateFileA in your question. Why are you calling an ANSI function??? – David Heffernan Jun 18 '23 at 13:26
  • Windows (and generally, OS-dependent APIs) gives more control. If you want that (for example, in your code, if you want some FILE_SHARE_* stuff), go for it. VC++ also includes _wfopen_s, which does what you want. – Michael Chourdakis Jun 18 '23 at 13:56
  • [`fopen` function can be used to open files with Unicode paths on Windows](https://stackoverflow.com/a/68515686/995714) – phuclv Jun 18 '23 at 14:27
  • @DavidHeffernan MS has gradually started recommending the "ANSI" APIs again, because Windows already supports the [en_US.UTF-8 locale](https://stackoverflow.com/a/63454192/995714) – phuclv Jun 18 '23 at 14:28
  • @phuclv it's certainly promising for those versions of Windows which support that – David Heffernan Jun 18 '23 at 16:35
  • It's certainly promising for folks that fall for the *"UTF-8 Everywhere"* mantra. Them folks frequently ignore that everyone is *actually* using UTF-16, as the internal character encoding: Java, Windows, .NET. Even the Internet's native language, ECMAScript, does. I'm at a loss as to how ignorant one has to be to recommend use of UTF-8 as the **internal** character encoding. – IInspectable Jun 18 '23 at 17:21
  • @IInspectable I guess your definition of "everywhere" doesn't include Linux – David Heffernan Jun 18 '23 at 19:45
  • @DavidHeffernan Linux (the kernel) has no notion of Unicode. It's using UTF-8 as an ASCII-compatible encoding, in parts. Executing Java on Linux still has Java use UTF-16. Executing ECMAScript in a browser on Linux still has ECMAScript use UTF-16 internally. I don't know what .NET Core or Mono use on Linux, but Linux is the outlier here. Out of all software, Linux really makes for a *tiny* fraction, that's not using UTF-16 internally. The *"UTF-8 Everywhere"* manifesto seems to ignore this. – IInspectable Jun 18 '23 at 20:02
  • @IInspectable my software has A and W functions in its API, because it started in the Win 9x days. It's used as a Matlab extension and the Matlab API doesn't support UTF-16, so there was no Unicode support for my software from Matlab. Recently it started having Unicode support because Matlab switched over to the UTF-8 locale. I didn't have to change anything. So maybe UTF-8 everywhere isn't practical, but this feature has odd places where it is useful. – David Heffernan Jun 18 '23 at 20:55
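
Following up on the comments about Unicode file names: below is a minimal sketch of the _wfopen_s route that Michael Chourdakis mentions, which keeps the FILE * interface but takes a wide-character path (Windows-specific; the function name is only for illustration, and the UTF-8 fopen approach phuclv links to is not shown):

#include <stdio.h>
#include <wchar.h>

void open_unicode_name(void)
{
    FILE *fp = NULL;

    /* L"..." is a wide (UTF-16) literal on Windows, so the file name can
       contain any Unicode characters. */
    if (_wfopen_s(&fp, L"README.md", L"rb") == 0 && fp != NULL) {
        /* ... read with fread() as in the "Standard C version" above ... */
        fclose(fp);
    }
}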

0 Answers