[p.haul] Increase the limit of open files for criu pre-dump and page-server

Submitted by Andrei Vagin on Nov. 13, 2017, 9:31 p.m.

Details

Message ID 20171113213117.5587-1-avagin@openvz.org
State New
Series "Increase the limit of open files for criu pre-dump and page-server"

Commit Message

criu restore has to be started with the standard limit, because the kernel
doesn't shrink an fdtable when the limit is reduced. fdtables are charged
to kmem, so if we run criu restore with a big limit, all restored
processes are forked with this limit and only later restore their own
limits, but their fdtables are allocated for the initial limit, so they
eat much more kernel memory than they have to.

https://jira.sw.ru/browse/PSBM-67194

Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Pavel Vokhmyanin <pvokhmyanin@virtuozzo.com>
Signed-off-by: Andrei Vagin <avagin@openvz.org>
---
 phaul/criu_api.py | 8 ++++++++
 1 file changed, 8 insertions(+)


diff --git a/phaul/criu_api.py b/phaul/criu_api.py
index 73c642a..4627d5f 100644
--- a/phaul/criu_api.py
+++ b/phaul/criu_api.py
@@ -9,6 +9,7 @@  import re
 import socket
 import subprocess
 import util
+import resource
 
 import pycriu
 
@@ -36,9 +37,16 @@  class criu_conn(object):
 		util.set_cloexec(css[1])
 		logging.info("Passing (ctl:%d, data:%d) pair to CRIU",
 					css[0].fileno(), mem_sk.fileno())
+
+		# criu uses a lot of pipes to pre-dump memory, so we need to
+		# increase the limit of open files.
+		fileno_max = int(open("/proc/sys/fs/nr_open").read())
+		fileno_old = resource.getrlimit(resource.RLIMIT_NOFILE)
+		resource.setrlimit(resource.RLIMIT_NOFILE, (fileno_max, fileno_max))
 		self._swrk = subprocess.Popen([criu_binary,
 									"swrk", "%d" % css[0].fileno()])
 		css[0].close()
+		resource.setrlimit(resource.RLIMIT_NOFILE, fileno_old)
 		self._cs = css[1]
 		self._last_req = -1
 		self._mem_fd = mem_sk.fileno()
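The save/raise/restore pattern the patch adds can be sketched as a standalone helper. This is a minimal illustration, not p.haul code: the helper name `run_with_raised_nofile` is hypothetical, and to stay runnable without privileges it only raises the soft limit to the current hard limit, whereas the patch raises both to the `/proc/sys/fs/nr_open` ceiling, which requires CAP_SYS_RESOURCE.

```python
import resource
import subprocess


def run_with_raised_nofile(cmd):
	"""Spawn cmd with RLIMIT_NOFILE raised, then restore the caller's
	limit so later children are not forked with oversized fdtables
	(the kernel never shrinks an fdtable once it has been allocated).

	Hypothetical helper mirroring the pattern in criu_conn: raise the
	limit only around the Popen call, restore it right after.
	"""
	old = resource.getrlimit(resource.RLIMIT_NOFILE)
	# Raising the soft limit up to the hard limit needs no privilege;
	# the patch goes further and sets both to /proc/sys/fs/nr_open.
	resource.setrlimit(resource.RLIMIT_NOFILE, (old[1], old[1]))
	try:
		# The child inherits the raised limit across fork/exec.
		proc = subprocess.Popen(cmd)
	finally:
		# The parent drops back to its original (soft, hard) pair.
		resource.setrlimit(resource.RLIMIT_NOFILE, old)
	return proc
```

Restoring in a `finally` block keeps the parent's limit correct even if `Popen` raises, which the patch itself does not guard against.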