
I have learned that the secure world can protect critical data from being accessed by the normal world. What I don't understand is how to measure the integrity of the normal world from the secure world.

I found some relevant work in Samsung TZ-RKP and SierraTEE, both of which implement a feature that measures the integrity of the normal world, but neither gives technical details. I have two questions and I'd appreciate it very much if anyone could give me some clues.

  1. Suppose I want to see what processes are running in the normal world. Do I have to use a kernel module in the normal world to do this? If so, how do I make sure it has passed the right result to the secure world? To be precise, how do I check whether the kernel has been compromised?

  2. Suppose I have an RSA key pair and I keep the private key in the secure world. When a process requests to decrypt some data, how does the secure world know whether the request comes from a legitimate process? A whitelist mechanism might help, but what if the kernel in the normal world has been compromised and the adversary pretends to be legitimate? The secure world seems to know nothing about what is happening in the normal world.

Even if the secure world can be sure the request is from a legitimate process and decrypts the data with the private key, the plaintext still has to be returned to the normal world somehow (e.g. via shared memory), where it could still be leaked. So what is the point of keeping the private key in the secure world?

BTW, I'm using an ARMv8 board.

Thanks in advance. It would be great if you could provide me with some examples.

Tgn Yang

1 Answer


TrustZone is not by itself a security system; you have to engineer one. Also, there are many different types of security. For instance, you seem to be assuming a software attack, yet there are many physical attacks against a system. Something must be a trusted computing base (TCB), i.e. some code that you assume cannot be compromised. A normal world kernel is probably too large to be part of the TCB, yet it can be a good first line of defence; an exploit against it is only a privilege elevation from user to supervisor. Your TrustZone API should expect untrusted data (i.e. the normal world kernel attempting buffer overflows, API misuse, etc.).

The key point here is that the TZASC and other bus peripherals can grant the secure world access to read/write normal world memory. For a full-blown OS like Linux you would have to verify MMU tables and other data structures: module loading, running processes, etc. all need verification. However, if you have a much simpler system in the normal world, it may be possible to verify it; most likely you have to settle for a portion of it. Random sampling of the PC might be a deterrent, but nothing will be fool-proof unless the normal world is proof-carrying code.
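As a concrete illustration of "settling for a portion of it", here is a minimal sketch of a secure-world check over the normal-world kernel text. The physical range, the secure-side mapping, and the sha256() helper are assumptions for the sketch (not something TZ-RKP or SierraTEE are documented to do this way):

    /* Sketch: secure-world integrity check over the normal-world kernel text.
     * Assumptions (hypothetical): the kernel's text has a fixed physical range
     * mapped into the secure world at KERNEL_TEXT_SECURE_VA, and sha256() is
     * provided by the secure OS's crypto library. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdbool.h>

    #define KERNEL_TEXT_SECURE_VA ((const uint8_t *)0x80008000) /* hypothetical mapping */
    #define KERNEL_TEXT_SIZE      (8u * 1024u * 1024u)          /* hypothetical size    */

    extern void sha256(const uint8_t *buf, size_t len, uint8_t digest[32]);

    static uint8_t ref_digest[32];   /* recorded once, right after secure boot */

    void record_reference_measurement(void)
    {
        sha256(KERNEL_TEXT_SECURE_VA, KERNEL_TEXT_SIZE, ref_digest);
    }

    /* Called periodically from the secure scheduler; returns false on mismatch. */
    bool normal_world_text_is_intact(void)
    {
        uint8_t now[32];
        sha256(KERNEL_TEXT_SECURE_VA, KERNEL_TEXT_SIZE, now);
        return memcmp(now, ref_digest, sizeof(now)) == 0;
    }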

  1. Suppose I want to see what processes are running in the normal world. Do I have to use a kernel module in the normal world to do this? If so, how do I make sure it has passed the right result to the secure world? To be precise, how do I check whether the kernel has been compromised?

Your secure world can contain an OS (or a primitive scheduler) which periodically checks the normal world code integrity. There are hardware modules like an RTIC, etc. You can also use the TZASC to lock the kernel code so that normal user mode has no access and normal supervisor mode is read-only. "Compromised" is an overloaded word; at some point you must trust something. Can the private key be replicated if the normal supervisor is compromised? You have to define your security goals. In the broadest sense, of course the normal world kernel can be compromised: you don't have a complete specification of its behaviour to verify from the secure world.
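A sketch of the TZASC side, using a hypothetical driver API because the register layout differs between TZASC implementations (TZC-380, TZC-400, vendor variants); the idea is simply that the kernel text region becomes read-only to non-secure masters:

    /* Sketch: lock the normal-world kernel text at the bus level so non-secure
     * masters can read but never write it. tzasc_configure_region() is a
     * hypothetical driver call, not a real API. */
    #include <stdint.h>

    #define KERNEL_TEXT_PA    0x80008000u          /* hypothetical physical base */
    #define KERNEL_TEXT_SIZE  (8u * 1024u * 1024u) /* hypothetical size          */

    enum tzasc_ns_perm {
        TZASC_NS_NONE,        /* non-secure masters: no access      */
        TZASC_NS_READ_ONLY,   /* non-secure masters: read, no write */
        TZASC_NS_READ_WRITE,
    };

    /* Hypothetical driver entry point provided by the secure OS. */
    extern int tzasc_configure_region(unsigned region, uint32_t base_pa,
                                      uint32_t size, enum tzasc_ns_perm perm);

    void lock_normal_kernel_text(void)
    {
        /* Region 1: kernel text readable by the normal world but not writable,
         * so even a compromised normal supervisor cannot patch it in place. */
        tzasc_configure_region(1, KERNEL_TEXT_PA, KERNEL_TEXT_SIZE,
                               TZASC_NS_READ_ONLY);
    }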

  2. Suppose I have an RSA key pair and I keep the private key in the secure world. When a process requests to decrypt some data, how does the secure world know whether the request comes from a legitimate process? A whitelist mechanism might help, but what if the kernel in the normal world has been compromised and the adversary pretends to be legitimate? The secure world seems to know nothing about what is happening in the normal world.

Your secure world probably has to have some co-operation from the encrypting entity. You could, for instance, limit the number of decryptions allowed without some form of verification. It seems the most valuable thing is the private RSA key. If you allow the normal world to request decryption, then that is your issue, not TrustZone's. You have to handle this with the usual cryptographic mechanisms for unknown/untrusted hosts. Is the RSA key pair global or per device? Do you support revocation, etc.? It is your system, and TrustZone is only part of it.
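A sketch of what "limit the number of decryptions without some form of verification" could look like in the secure world's request handler; the request layout and rsa_private_decrypt() are hypothetical, and a real handler would also have to verify that the shared-memory pointers lie inside the agreed normal-world window:

    /* Sketch: secure-world RSA decrypt service that rate-limits requests from
     * the untrusted normal world. In a real TEE this would sit behind an
     * SMC/trusted-application interface and the key never leaves secure memory. */
    #include <stdint.h>
    #include <stddef.h>

    #define MAX_UNVERIFIED_DECRYPTS 16   /* policy knob, arbitrary for the sketch */

    struct decrypt_req {
        const uint8_t *cipher;   /* pointer into normal-world shared memory */
        size_t         cipher_len;
        uint8_t       *plain;    /* output buffer, also in shared memory    */
        size_t         plain_max;
    };

    /* Provided by the secure OS's crypto library (hypothetical signature). */
    extern int rsa_private_decrypt(const uint8_t *in, size_t in_len,
                                   uint8_t *out, size_t out_max);

    static unsigned decrypts_since_attestation;

    /* Returns <0 on refusal or failure; the caller marshals the result back. */
    int handle_decrypt_request(struct decrypt_req *req)
    {
        /* Treat every field as attacker-controlled: validate before touching. */
        if (req->cipher == NULL || req->plain == NULL || req->cipher_len == 0)
            return -1;

        /* Refuse further decrypts until the normal world passes a fresh
         * integrity check (or some other form of verification). */
        if (decrypts_since_attestation >= MAX_UNVERIFIED_DECRYPTS)
            return -2;

        decrypts_since_attestation++;
        return rsa_private_decrypt(req->cipher, req->cipher_len,
                                   req->plain, req->plain_max);
    }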

artless noise
  • Thanks. Do you mean that by configuring the TZASC, I can make the supervisor part of the TCB? As for the verification, I can't think of an effective way to do it. The *token* (used for verification), if stored in the normal region, should be considered untrusted and thus unable to prove identity; but if stored in the secure region, the normal world can't request it without verification, and the verification requires *the token* to be present, which seems to be a contradiction. – Tgn Yang Mar 30 '16 at 03:56
  • The Linux kernel has a subsystem to measure integrity, but it seems to be based on x86 and requires a TPM module. I think TrustZone should be more feasible than a TPM; do I have to modify the kernel to achieve the same goal on ARM? – Tgn Yang Mar 30 '16 at 04:07
  • You must have *secure boot*. The details of what kind of code (or Linux configuration) is allowed can be settled at boot time; you must trust that the secure world loads the normal world software properly. If so, then you can periodically check that it is uncorrupted at **run** time. The Linux *certificate* can be signed by some build server to show that it is *authorized*. This is what TiVo did (and people got mad about it and created GPLv3, which Linux does not use). – artless noise Mar 30 '16 at 05:06
  • Sorry for the delay. Periodically checking the normal world seems to be an effective way, but like I said before, the attacker may pretend to be legitimate and send the secure world fake messages, telling the secure world that it is not compromised when in fact it already has been. – Tgn Yang Mar 30 '16 at 10:25
  • Or should I just trust the kernel module, like you said in your answer? – Tgn Yang Mar 30 '16 at 10:25
  • The kernel does not *send a message*; you can physically check its code. It could, however, simply not execute that code, or change the MMU to use other memory. As I said, unless it is proof-carrying code or some other formal scheme, you won't know 100% that it has not been compromised. *Or should I just trust the kernel...* I think the exact opposite; never trust it. – artless noise Mar 30 '16 at 12:14
  • Thanks a lot. So this is what I think now (please correct me if I'm wrong): using TTBR1, the secure OS is able to access the non-secure code physically and check its integrity, and this requires me to carefully initialize the MMU, since it is vital to locate the non-secure memory. This way I can check the non-secure kernel directly from the secure world, instead of relying on the normal world itself. – Tgn Yang Mar 30 '16 at 14:52
  • Yes, I believe this is what 'SierraTEE' is doing, although I have not investigated. You can also trap when the normal world tries to access 'secure' memory, which is a sure sign that something went wrong. The 'RTIC' or run-time integrity check is never fool-proof; there are all sorts of ways to alter execution in the normal world. You can look for 'XN' mappings in the MMU to see whether some other (physical) memory is able to execute as normal world kernel code; then you have to ensure that Linux doesn't do this legitimately (probably kconfig/version related). – artless noise Mar 31 '16 at 14:39
  • Thanks! Just one last question: how can I know which physical region corresponds to the sys_call_table? The physical address may vary every time it runs, as a page may be swapped. Besides, if the MMU of the secure world covers parts of the code segment of the normal world, is there a risk that the secure world could overwrite these pages? – Tgn Yang Mar 31 '16 at 17:19
  • Monitoring the syscall table would be something new; I am just telling you what tools TZ has. It is possible to make memory inaccessible and lock it down at boot, but then you could not monitor it. The normal world's physical pages can be marked read-only from the secure world, and then the normal world would not be able to corrupt them. TZ does not deal with an MMU, only physical addresses on the bus. – artless noise Apr 01 '16 at 03:46
  • Thanks a lot! I'll investigate more. – Tgn Yang Apr 01 '16 at 04:55