This repository has been archived by the owner on Jan 23, 2023. It is now read-only.

Fix out of range access in GetRecycleMemoryInfo (#26873) #26912

Merged
merged 1 commit into from
Nov 12, 2019

Conversation


@303248153 303248153 commented Sep 27, 2019

Related to #26873

On Linux, GetCurrentProcessorNumber, which uses sched_getcpu(), can return a value greater than or equal to the number of processors reported by sched_getaffinity with CPU_COUNT or by sysconf(_SC_NPROCESSORS_ONLN). For example, taskset -c 2,3 ./MyApp makes CPU_COUNT return 2 while sched_getcpu() can return 2 or 3, and an OpenVZ kernel can make sysconf(_SC_NPROCESSORS_ONLN) return a limited CPU count while sched_getcpu() still reports the real processor number.

We should use GetCurrentProcessorNumber() % NumberOfProcessors as the array index into pRecycledListPerProcessor to avoid out-of-range access.
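The fix can be sketched as follows (the function name and parameters here are illustrative stand-ins, not the actual PAL code):

```c
#include <assert.h>

// Illustrative sketch of the fix: clamp the CPU number reported by
// sched_getcpu() into the bounds of an array sized by the processor count,
// so a kernel reporting "extra" CPU ids cannot index past the end.
static int RecycledListIndex(int currentProcessorNumber, int numberOfProcessors)
{
    // e.g. under taskset -c 2,3: numberOfProcessors == 2, but the id can be 3
    return currentProcessorNumber % numberOfProcessors;
}
```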


Also, I hope this can be merged into a 3.0.x release so my OpenVZ VPS can be upgraded to .NET Core 3.0 sooner :|

@janvorli
Member

@303248153 thank you for finding this. My assumption was that CPU indexes are always smaller than sysconf(_SC_NPROCESSORS_CONF). Since that is not the case on some kernels, we also need to fix several places in the GCToOSInterface implementations that rely on the same assumption.

Could you please fix that too in this PR? Here are the places that need to be fixed:

for (size_t i = 0; i < g_totalCpuCount; i++)

for (size_t procNumber = 0; procNumber < g_totalCpuCount; procNumber++)

for (uint16_t i = 0; i < GCToOSInterface::GetTotalProcessorCount(); i++)

The change would be to use the MAX_SUPPORTED_CPUS constant as the upper loop limit at all these places. That is fine at these places; the only place where it would increase the number of iterations is gcenv.unix.cpp#L333, but that doesn't matter, as it is a one-time initialization at startup and the loop is very tight.
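The suggested loop-limit change can be sketched like this (MAX_SUPPORTED_CPUS and the boolean affinity array are stand-ins for the actual coreclr definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_SUPPORTED_CPUS 1024  // stand-in for the coreclr constant

// Scan the full supported CPU range instead of the sysconf-reported count,
// so CPUs with indices >= the reported count are not silently skipped.
static int CountAffinitizedCpus(const bool affinitySet[MAX_SUPPORTED_CPUS])
{
    int count = 0;
    for (size_t i = 0; i < MAX_SUPPORTED_CPUS; i++)  // was: i < g_totalCpuCount
    {
        if (affinitySet[i])
            count++;
    }
    return count;
}

// Demo: a CPU index well above a typical sysconf count is still found.
static int DemoCount(void)
{
    bool set[MAX_SUPPORTED_CPUS] = { false };
    set[2] = true;
    set[900] = true;
    return CountAffinitizedCpus(set);
}
```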

@303248153
Author

I think those three locations are fine; the number of processors returned by sched_getaffinity should not be greater than sysconf(_SC_NPROCESSORS_CONF). The problem is that sched_getcpu can return a number not in the affinity set (when using an OpenVZ kernel), and in that case GetProcessorForHeap will just return false.

Also, as far as I know, the total processor count and the affinity set can change after a process launches: for example, CPU hotplug can change the total processor count, and taskset on Linux or SetProcessAffinityMask on Windows can change the affinity set. These are special cases, so .NET Core may not support them well, but at least it should not crash or access invalid memory.

@janvorli
Member

@303248153 The problem at those locations is that the GC would not be able to see processors with indexes higher than or equal to the number it gets from sysconf(_SC_NPROCESSORS_CONF), and so would not run GC threads / create GC heaps for them in server GC mode, which would cause a performance hit. And not only that: in the case you described above, where the affinity was set to CPUs 2 and 3 and sysconf(_SC_NPROCESSORS_CONF) returned 2, g_processAffinitySet in gcenv.unix.cpp would be initialized to empty and GetProcessorForHeap would always fail. I am not sure what exact effect that would have on the GC.
The fixes I've suggested would make this case work correctly, as the affinity-set processing would consider all the CPUs in the affinity set and not just the ones with indices lower than the value reported by sysconf.

@303248153 303248153 force-pushed the 20190927-fix-GetRecycleMemoryInfo branch from 80b8eac to 6e91e54 Compare September 29, 2019 23:44
@303248153
Author

303248153 commented Sep 30, 2019

@janvorli That's a misunderstanding caused by my imprecise description. I meant that taskset -c 2,3 ./MyApp makes sched_getaffinity(getpid(), sizeof(cpu_set_t), &cpuSet) with CPU_COUNT(&cpuSet) equal to 2, while sysconf(_SC_NPROCESSORS_ONLN) still reports all cores (like 4). Here is what I tested:

OpenVZ VPS (from hostbrz):

  • sched_getaffinity, CPU_COUNT returns 1
  • sysconf(_SC_NPROCESSORS_ONLN) returns 1
  • sysconf(_SC_NPROCESSORS_CONF) returns 8
  • sched_getcpu returns 0 ~ 5

KVM VPS (also from hostbrz):

  • sched_getaffinity, CPU_COUNT returns 4
  • sysconf(_SC_NPROCESSORS_ONLN) returns 4
  • sysconf(_SC_NPROCESSORS_CONF) returns 4
  • sched_getcpu returns 0 ~ 3

KVM VPS with taskset -c 2,3:

  • sched_getaffinity, CPU_COUNT returns 2
  • sysconf(_SC_NPROCESSORS_ONLN) returns 4
  • sysconf(_SC_NPROCESSORS_CONF) returns 4
  • sched_getcpu returns 2 ~ 3

pRecycledListPerProcessor uses sysconf(_SC_NPROCESSORS_ONLN) as the array length and sched_getcpu as the array index, so it has to do a modulo operation to avoid out-of-range access on an OpenVZ VE. The three locations you listed are not affected by this, unless somewhere uses sched_getcpu as an array index or assumes the value from sched_getcpu must exist in the affinity set.

I have just fixed my comment for a clearer description; please review it again, thanks.

@janvorli
Member

OpenVZ VPS (from hostbrz):

sched_getaffinity, CPU_COUNT returns 1
sysconf(_SC_NPROCESSORS_CONF) returns 1
sched_getcpu returns 0 ~ 5

So you are saying that in this case, sched_getaffinity would always return a cpuSet with CPU #0 set? That sounds strange. I would expect it to return a cpuSet with one CPU set, but that could be CPU 0, 1, 2, 3, 4 or 5. If I am right, the GC code needs to be fixed. Consider this case:

sched_getaffinity, CPU_COUNT returns 1
sched_getaffinity, returns set with only CPU 2 set.
sysconf(_SC_NPROCESSORS_CONF) returns 1

In this case, the GC code at the places I've commented on would look through the affinity set only at index 0, so it would never find the CPU at index 2.
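That missed-CPU scenario can be reproduced directly with the CPU_SET macros (this only illustrates the scan, not the actual GC code):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>

// Count affinitized CPUs while scanning only the first `limit` indices.
static int ScanAffinity(cpu_set_t *set, int limit)
{
    int found = 0;
    for (int i = 0; i < limit; i++)
    {
        if (CPU_ISSET(i, set))
            found++;
    }
    return found;
}

// Demo of the hypothetical case: only CPU 2 is in the set, but the scan
// limit is 1 (the sysconf result), so the CPU is never found.
static int DemoMissedCpu(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    return ScanAffinity(&set, 1);
}
```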

@303248153
Author

@janvorli @echesakovMSFT

So you are saying that in this case, the sched_getaffinity would always return cpuSet with CPU #0 set? That sounds strange

sched_getcpu() can return a number not in the cpuSet on OpenVZ VE (see the code and output below).

I noticed that coreclr uses _SC_NPROCESSORS_CONF on ARM/ARM64 and _SC_NPROCESSORS_ONLN otherwise to get the total CPU count:
79aadb8
#18289

And I ran the tests again; the results surprised me:

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>
#include <stdio.h>

int main() {
	int nproc = sysconf(_SC_NPROCESSORS_CONF);
	int nprocOnline = sysconf(_SC_NPROCESSORS_ONLN);
	printf("nproc: %d\n", nproc);
	printf("nprocOnline: %d\n", nprocOnline);
	while (1) {
		cpu_set_t cpuSet;
		if (sched_getaffinity(getpid(), sizeof(cpu_set_t), &cpuSet) != 0) {
			perror("sched_getaffinity");
			return 1;
		}
		printf("affinity: ");
		for (int x = 0; x < nproc; ++x) {
			printf("%d, ", CPU_ISSET(x, &cpuSet));
		}

		printf("cpuid: %d\n", sched_getcpu());
		sleep(1);
	}
	return 0;
}

Output on OpenVZ VPS:

ubuntu@zkweb:~/tmp$ ./a.out
nproc: 8
nprocOnline: 1
affinity: 1, 0, 0, 0, 0, 0, 0, 0, cpuid: 2
affinity: 1, 0, 0, 0, 0, 0, 0, 0, cpuid: 2
affinity: 1, 0, 0, 0, 0, 0, 0, 0, cpuid: 2
affinity: 1, 0, 0, 0, 0, 0, 0, 0, cpuid: 2
affinity: 1, 0, 0, 0, 0, 0, 0, 0, cpuid: 2
affinity: 1, 0, 0, 0, 0, 0, 0, 0, cpuid: 5
affinity: 1, 0, 0, 0, 0, 0, 0, 0, cpuid: 5

Output on KVM VPS:

nproc: 4
nprocOnline: 4
affinity: 1, 1, 1, 1, cpuid: 1
affinity: 1, 1, 1, 1, cpuid: 1
affinity: 1, 1, 1, 1, cpuid: 1
affinity: 1, 1, 1, 1, cpuid: 1
affinity: 1, 1, 1, 1, cpuid: 1

It looks like on an OpenVZ VE, _SC_NPROCESSORS_CONF reports all CPU cores but _SC_NPROCESSORS_ONLN only reports the limited value, so using _SC_NPROCESSORS_CONF on all platforms would also fix this issue.

@echesakovMSFT Could you tell me why _SC_NPROCESSORS_CONF is not used on x86?

@janvorli Should I still fix those 3 locations, even though sched_getcpu() may return a value not in the affinity set?

@echesakov

@303248153 As far as I remember, before my changes in #18053 sysconf(_SC_NPROCESSORS_ONLN) was used on all platforms to determine the number of available logical processors.

This caused issues on ARM/ARM64, where sysconf(_SC_NPROCESSORS_ONLN) can return 1 at the beginning of process startup if all the other cores were "sleeping", and the runtime will use the return value to make some decisions (e.g. what type of write barrier to use - single-threaded or multi-threaded). Later, the same call to sysconf(_SC_NPROCESSORS_ONLN) can report any arbitrary number between 1 and the actual number of logical processors, depending on the load. As you can see, this would cause hard-to-find failures.

We decided in #18053 to use sysconf(_SC_NPROCESSORS_CONF) instead. However, this could cause another type of issue when running in a Docker container - it would ignore the limitations set by --cpuset-cpus. I believe this was discussed in #10690.

So later, in #18289, I changed X86/AMD64 back to using sysconf(_SC_NPROCESSORS_ONLN), as it was before #18053, and kept sysconf(_SC_NPROCESSORS_CONF) on ARM/ARM64.
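The resulting platform split looks roughly like this (a sketch of the arrangement described above, not the exact coreclr source):

```c
#include <assert.h>
#include <unistd.h>

// ARM/ARM64: use the configured count, which stays stable even while cores
// sleep; x86/x64: keep the online count, as it was before #18053.
static long GetCpuCountFromOS(void)
{
#if defined(__arm__) || defined(__aarch64__)
    return sysconf(_SC_NPROCESSORS_CONF);
#else
    return sysconf(_SC_NPROCESSORS_ONLN);
#endif
}
```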

@303248153
Author

@echesakovMSFT Thanks for your explanation!

In my tests, _SC_NPROCESSORS_ONLN still reports all cores when using --cpuset-cpus inside Docker:

ubuntu@brz:~/tmp$ docker run -it --rm -v$(pwd):/app --cpuset-cpus 2,3 ubuntu:18.04 bash
root@07c1419d636c:/# cd /app
root@07c1419d636c:/app# ./a.out
nproc: 4
nprocOnline: 4
affinity: 0, 0, 1, 1, cpuid: 2
affinity: 0, 0, 1, 1, cpuid: 2
affinity: 0, 0, 1, 1, cpuid: 2

There are two types of CPU count: the total CPU count (PAL_GetTotalCpuCount) and the logical CPU count (PAL_GetLogicalCpuCountFromOS). I think we can use _SC_NPROCESSORS_CONF to get the total CPU count on all platforms, because the logical CPU count is already calculated from sched_getaffinity, which works well inside Docker with the --cpuset-cpus option.

ref: https://github.com/dotnet/coreclr/blob/master/src/pal/src/misc/sysinfo.cpp
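Deriving the logical CPU count from the affinity mask, which is what makes --cpuset-cpus limits visible, can be sketched as follows (the function name and fallback are illustrative, not the actual PAL code):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>
#include <unistd.h>

// The logical CPU count honors cgroup/taskset limits because it comes from
// the process affinity mask rather than from sysconf.
static int GetLogicalCpuCount(void)
{
    cpu_set_t set;
    if (sched_getaffinity(getpid(), sizeof(set), &set) != 0)
        return 1;  // conservative fallback if the query fails
    return CPU_COUNT(&set);
}
```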

@janvorli
Member

janvorli commented Oct 1, 2019

@303248153 thank you for the testing source code. The fact that the affinity has nothing to do with the processor we get from sched_getcpu on OpenVZ VPS is really surprising. I want to do some googling to see if that's some weird quirk or if my understanding of the affinity set is wrong.
As for the _SC_NPROCESSORS_CONF vs _ONLINE, it also seems to me that it would make sense to switch to using _CONF everywhere, but I'll need to spend a little time to understand all the consequences of such change.
I'll get back to you here soon.

@janvorli
Member

janvorli commented Oct 2, 2019

I was baffled by the weird affinity set reported on OpenVZ until I found this:
https://bugs.openvz.org/browse/OVZ-6046
Basically, what the OpenVZ-patched kernel does is report the affinity set as a set with _SC_NPROCESSORS_ONLN bits set, starting at bit 0. It has no correspondence to the actual indexes of the running CPUs (reported by sched_getcpu()).

So we can keep your change as-is without modifying the places in the GC that I mentioned, since it is clear that the bits set in the affinity set are always in the range 0.._SC_NPROCESSORS_ONLN-1, and thus everything will be processed correctly.

@303248153
Author

@janvorli Thanks for your investigation! One question remains:

Should we use _SC_NPROCESSORS_CONF everywhere?
In my tests, sched_getcpu() can be greater than _SC_NPROCESSORS_ONLN on OpenVZ but is still less than _SC_NPROCESSORS_CONF. Using _SC_NPROCESSORS_CONF everywhere could eliminate potential problems, and it doesn't affect the active (logical) CPU count calculated from sched_getaffinity (gc_heap::n_heaps will remain unchanged).

Also, I checked the usage of GetCurrentProcessorNumber in the GC code: heap_select::init_cpu_mapping and heap_select::select_heap use sched_getcpu() as an array index, but the array length is MAX_SUPPORTED_CPUS and by default it returns heap 0, so it should be safe.
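That safety argument can be sketched like this (the array names and fallback are illustrative; MAX_SUPPORTED_CPUS stands in for the coreclr constant):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_SUPPORTED_CPUS 1024  // stand-in for the coreclr constant

// The per-CPU heap map is sized by MAX_SUPPORTED_CPUS, so any
// sched_getcpu() value below that bound is a safe index; anything else
// falls back to heap 0.
static uint16_t SelectHeap(const uint16_t heapByCpu[MAX_SUPPORTED_CPUS], int cpu)
{
    if (cpu < 0 || cpu >= MAX_SUPPORTED_CPUS)
        return 0;
    return heapByCpu[cpu];
}

// Demo: CPU 5 is mapped to heap 3; an out-of-range id gets heap 0.
static uint16_t DemoSelect(int cpu)
{
    uint16_t map[MAX_SUPPORTED_CPUS] = { 0 };
    map[5] = 3;
    return SelectHeap(map, cpu);
}
```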

@303248153 303248153 force-pushed the 20190927-fix-GetRecycleMemoryInfo branch from 6e91e54 to 3461971 Compare October 3, 2019 00:16
@janvorli
Member

janvorli commented Oct 3, 2019

As for _SC_NPROCESSORS_CONF vs _SC_NPROCESSORS_ONLN, I'd create a separate issue for investigating and possibly making such a change. Your fix is good as-is.

@303248153
Author

Glad to hear that; now I'll just wait for the merge.
I hope to see it applied to 3.0.x ~ 3.1.

@jkotas jkotas added the area-PAL label Nov 2, 2019
@JustArchi

.NET Core 3.0 causes a regression on OpenVZ setups compared to the 2.2 release; I hope this can be merged into 3.0 as well.

@maryamariyan
Member

Thank you for your contribution. As announced in dotnet/coreclr#27549 this repository will be moving to dotnet/runtime on November 13. If you would like to continue working on this PR after this date, the easiest way to move the change to dotnet/runtime is:

  1. In your coreclr repository clone, create patch by running git format-patch origin
  2. In your runtime repository clone, apply the patch by running git apply --directory src/coreclr <path to the patch created in step 1>

@janvorli janvorli closed this Nov 8, 2019
@janvorli janvorli reopened this Nov 8, 2019
@janvorli
Member

/azp run coreclr-ci

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@janvorli janvorli merged commit 2c2bbb5 into dotnet:master Nov 12, 2019