Pootle migration

Hi *,

Please CC me on any replies as I'm not on the list.

Keeping this note in, so others will follow the request as well.

On Mon, 4 Apr 2011 22:28:43 +0200, Christian Lohmaier wrote:
Again, as you apparently still don't understand what I already wrote many
times:
* Adding more resources will /NOT/ make pootle run faster than it does
now.
The VM already has way more resources assigned than necessary. It is
/idle/ almost all the time.

As far as I know the server hasn't really been used yet, so I guess
we'll be collecting data from now on to see how things go.

Also from the available data from the old pootle server. But yes, the
new one doesn't have much data yet.

During the
setup of the server, we make tradeoffs between performance and memory
use. If there is no memory available, we'll obviously try to optimise at
all costs for minimising memory use,

Please explain those settings. Which settings, what is the effect, how
to see the effect (i.e. what UI actions to perform)

and that is what I understand Rimas said: things might be slower than
necessary, since we are not optimising for performance, but for memory use.

No, and I repeat again: Memory is not an issue. The memory leak is.

* The only thing that is slow (when executed the first time) is
generation of the zips. So when you as translator request a zip: Don't
click the link multiple times because you don't immediately get the
zip. It can take 10 seconds for the files to be generated. Again:
* Adding more resources will /not/ make that time shorter. It is a
single-threaded process that can only use one single CPU, thus
assigning more CPUs won't help at all (the VM has 4 CPUs assigned
already).
Requesting that same zip another time (or different zips of the
project belonging to the same language) is fast/instant, but requesting
the zip for another language again may take some seconds for the first
request (or again after the files changed in between).
* Pootle has a memory leak when creating the zips. It won't release
memory after processing the files.
This would be the only time where the assigned resources may run out
(the VM has 1GB of RAM assigned): Multiple different languages request
the zip at the same time. Then memory usage increases, memory runs out
and either it is crawling along or the process gets killed.

Some stuff that is slow to load is cached for later use.

"Some stuff" maybe, but that little is not what I'm talking about.

This is done
for performance optimisation. This is one of the reasons you won't see
the memory use go down immediately after generating a ZIP file.

No, this has nothing to do with caching. It is a memory leak. Rimas is
very good at not forwarding relevant information it seems. So here I
just copy and paste what I wrote to Rimas already.

Latest news! Pootle is also leaking disk space, because now we're out of this resource too.

This is easily solvable (and I'm not sure whether you can call it "leaking").
It's definitely using more than was expected (more than 8GB of
po-files on disk, and almost 4GB of mysql database).

But if people had said beforehand that the values from the
dedicated pootle server are not representative (<3GB for www-dir/pos
and <1GB of mysql db), then appropriate disk space could have been
assigned beforehand.

I'm currently moving pootle's data over to an additional virtual disk though...

ciao
Christian

On 2011.04.06 13:34, Christian Lohmaier wrote:

Latest news! Pootle is also leaking disk space, because now we're out of this
resource too.

This is easily solvable (and I'm not sure whether you can call it "leaking").
It's definitely using more than was expected (more than 8GB of
po-files on disk, and almost 4GB of mysql database).

But if people had said beforehand that the values from the
dedicated pootle server are not representative (<3GB for www-dir/pos
and <1GB of mysql db), then appropriate disk space could have been
assigned beforehand.

I'm currently moving pootle's data over to an additional virtual disk though...

Thanks, please ping us when you're done.

By the way, maybe we should take this discussion off the l10n list? I don't think many localizers are interested in the technical difficulties we're facing.

Rimas

Hi Rimas, *,

On 2011.04.06 13:34, Christian Lohmaier wrote:

[...]
I'm currently moving pootle's data over to an additional virtual disk
though...

Thanks, please ping us when you're done.

Done.

By the way, maybe we should take this discussion off the l10n list? I don't
think many localizers are interested in the technical difficulties we're facing.

Well, I want to keep it on a list - and the l10n list seems just
suitable for it. While the localizers might not be interested in every
technical detail, having the discussion on list saves them the
questions "what is wrong with pootle?" or "is pootle down? When can we
work with pootle?" etc.

Having it all off-list basically results in more work in the end (or
more annoyed users, since they try in vain to do some work, and don't
know the reason for it).

I as a user of a webservice would like to know why it is not
working/when I can expect it to be online again...

ciao
Christian

Hi,

2011.04.06 14:27, Christian Lohmaier rašė:

[...]
I'm currently moving pootle's data over to an additional virtual disk
though...

Thanks, please ping us when you're done.

Done.

Thanks again.

By the way, maybe we should take this discussion off the l10n list? I don't
think many localizers are interested in the technical difficulties we're facing.

<...>

I as a user of a webservice would like to know why it is not
working/when I can expect it to be online again...

Well, I wrote to the list yesterday that it won't work for a while and to stay tuned for further announcements. I think this should be enough. ;)

Rimas

Hi *

On Wed, 2011-04-06 at 10:50 +0200, Christian Lohmaier wrote:

I find your responses rude and unhelpful.

Sorry for that. I'm just tired of writing the same stuff over and over again.

Somehow we manage to provide this software on hundreds of sites running
fine, ranging from hosting on a few hundred megabytes of RAM to machines
with several gigabytes. All accidents?

No, on the contrary: especially because it is run on so many servers
with limited resources, I'm not willing to waste RAM on the pootle VM
alone, when pootle doesn't need it at all.
RAM assigned to the VM is not available to the host or other VMs, even
if the RAM is not used inside the VM.

I could show you the specific lines of code caching big objects
(expected to be tens to hundreds of megabytes in size), but I guess that
won't convince you either. It is a leak, because someone who (I guess)
never looked at the Pootle code says so.

I clearly explained why I'm convinced that it is a memory leak. It
doesn't free the memory, even when the machine is idling for hours. It
doesn't need that memory, since when the worker is replaced by a fresh
one, the functionality still works.

And you're top-posting and fullquoting. This (again from my long
email/mailing-list experience) is empirically a sign that people don't
actually read what was written, don't answer the questions, and
basically discuss different topics all the way, leading to repetition
over and over again.

Why can't you at least start
by assuming that I might have an idea of how Pootle works?

That's not the point. Please take the time to actually read the post.

When you are willing to discuss things under the good faith assumption
that I _might_ not be talking nonsense, we can continue the
conversation. I've been trying to help you in my free time based on my
experience, but it seems you'd prefer to assume I don't know what I'm
talking about.

Well - all I heard so far, and that was not from you personally, but
mainly via Rimas, was just guesswork, no facts. And that guesswork
contradicts hard numbers, real benchmarks. And I just hate it when I
have to write the same thing over and over again. It surely did have
an impact on the tone of my response, apologies for that. But that
doesn't change anything.

In the meanwhile, here is some recommended reading:
http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm

Sorry, but the information content of that article is near zero
compared to what has been written here already. It contains a hint for
programmers regarding the "builtin memory leak" when working with
large numbers of integers or floats. I also don't care whether it is
"python itself" or the allocator that doesn't give back memory, or
that memory cannot be returned because of memory fragmentation. In the
end it makes the system unusable. Not returning memory is a memory
leak, no matter whether it is by design or not. If you cannot free
memory, you have to spawn a child process and dismiss that child to
keep a runnable system. It is /not/ acceptable to accumulate more and
more memory and just say "but it is not my fault, it is the memory
allocator that doesn't return the memory to the system".
It's not a few kBs here and there we're talking about, but tens to
hundreds of MB.
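
To illustrate (a minimal sketch, not Pootle's actual code - the
function names are made up): doing the memory-hungry work in a
short-lived child process means the OS reclaims everything when the
child exits, no matter what the Python allocator would have kept.

import multiprocessing

def build_zip(path):
    # stand-in for the memory-hungry export work: simulate a large
    # allocation that a long-lived worker would never give back
    data = b"x" * (200 * 1024 * 1024)
    with open(path, "wb") as f:
        f.write(data[:64])

def export_in_child(path):
    # run the export in a throwaway child; when it exits, the OS
    # reclaims all of its memory, fragmented or not
    worker = multiprocessing.Process(target=build_zip, args=(path,))
    worker.start()
    worker.join()

if __name__ == "__main__":
    export_in_child("/tmp/export.zip")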

In my definition, when a (long-living) process doesn't return unneeded
memory, that process is leaking memory. Whether it is by design or not
doesn't matter to me.
* It consumes more and more memory (it would be no problem if creating
the zips for another language would just reuse the memory that was
allocated when creating the zip for the first language)
* It doesn't release that memory when idle (it would also be no
problem if that memory were returned to the system after a while -
now it has to be enforced by restarting the server process itself;
see the config sketch after this list)
* It doesn't release the memory when the machine runs out of available
memory (thus people cry "the machine needs more RAM", but there is no
need, as it can be easily circumvented)
* the allocated memory is not needed to perform any operation after
the memory has been used once. (it doesn't accelerate anything, having
that memory or not allocated does not make any difference at all to
subsequent requests. It is just blocking system resources - as is
obvious by replacing the process with a fresh one)
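
For reference, that enforced restart is what mod_wsgi's
maximum-requests directive does (the vhost config quoted further down
in this thread already uses it); a minimal sketch, where the daemon
group name and the numbers are only illustrative:

WSGIDaemonProcess pootle processes=2 threads=1 maximum-requests=20 display-name=%{GROUP}

Each daemon process is then replaced by a fresh one after 20 requests,
so the leaked memory is handed back to the system without users
noticing anything.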

Again:
* I don't have a problem with pootle taking half a day to import the
files for a new release (one time thing, no big deal)
* I don't have a problem with pootle taking very long for the very
first hit of the frontpage after a reboot (system is not rebooted that
often, also no big deal)
* I don't have a problem with restarting pootle server-processes to
workaround the memory-leak (whether you call it leak or not, I
definitely call it leak). It limits the amount of concurrent worker
processes, but that is not a problem since even with just the two as
in the VM, you can easily serve 50 concurrent requests per second
while benchmarking (resulting in being able to handle about 200
requests/second on the frontpage, thus plenty of reserves available. No
problem)
* I don't have a problem with you
* I don't have a problem with Rimas (or anyone else here)

* I see a problem in pootle not having a queue or other limitation on
the "create zips" case. It is very easy to take a server out for at
least a couple of minutes by requesting the zips of a huge project for
multiple languages at once. An 8 CPU system with 8 workers → just request
8 different languages and all workers will be using 100% CPU each for
multiple minutes, additionally fighting a little over disk i/o, and
will not be available for other stuff, and the preparations are slow
(let alone the waste of memory resulting from not releasing it again).

* I have a problem with top-posting & fullquoting, as (with very high
confidence) it is a clear sign of the person not reading the post and
not trying to understand the post.
* I have a problem with having to write the same stuff over and over
again. I get angry when I have to, and my tone might then no longer be
100% appropriate.

ciao
Christian

On Wed, 2011-04-06 at 10:50 +0200, Christian Lohmaier wrote:

Hi *,

Hi Christian

I find your responses rude and unhelpful. I'll try to assume that
communication isn't going smoothly since we might both be using our
second language.

Somehow we manage to provide this software on hundreds of sites running
fine, ranging from hosting on a few hundred megabytes of RAM to machines
with several gigabytes. All accidents?

I could show you the specific lines of code caching big objects
(expected to be tens to hundreds of megabytes in size), but I guess that
won't convince you either. It is a leak, because someone who (I guess)
never looked at the Pootle code says so. Why can't you at least start
by assuming that I might have an idea of how Pootle works? I don't have
time to explain each optimisation and feature we have in the code. My
time is limited, and I was hoping we can work together instead of giving
lectures about programming and system administration.

When you are willing to discuss things under the good faith assumption
that I _might_ not be talking nonsense, we can continue the
conversation. I've been trying to help you in my free time based on my
experience, but it seems you'd prefer to assume I don't know what I'm
talking about.

With mutual respect, we can take this forward, but not without it. And
that includes respect for the hard work that Rimas is doing.

In the meanwhile, here is some recommended reading:
http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm

Keep well
Friedel

Hi Christian, *

No, on the contrary: especially because it is run on so many servers
with limited resources, I'm not willing to waste RAM on the pootle VM
alone, when pootle doesn't need it at all.

Please leave the technical discussion aside for a minute.

What *we* as a team of localizers need is a working and performant
setup to do our translation work. All data that you retrieved from
the Brazilian pootle server is just peanuts and not relevant for
what we need to have in the next few weeks, for several reasons:

- we had no full localization on the server (but we will have to provide
this for 3.4 localization)

- hardly any team did full translation on the server, we "only" did
bugfixes

There were some good reasons to have a rather well-equipped server for
pootle. We knew that pootle might not be the perfect solution regarding
memory usage, multithreading... That we now cannot use the initially
planned setup is very unfortunate. But we still need a system that
provides similar performance.

I clearly explained why I'm convinced that it is a memory leak. It
doesn't free the memory, even when the machine is idling for hours. It
doesn't need that memory, since when the worker is replaced by a fresh
one, the functionality still works.

No - it is just that this is totally irrelevant in the current situation.
Even if pootle has a memory leak, we won't fix this within the next few
days. But we need to start translating asap!

So - if there is any way to provide more resources (memory), we should do
this and analyze the root cause later (I'm sure pootle developers are
interested to help with this).

Again:
* I don't have a problem with pootle taking half a day to import the
files for a new release (one time thing, no big deal)
* I don't have a problem with pootle taking very long for the very
first hit of the frontpage after a reboot (system is not rebooted that
often, also no big deal)
* I don't have a problem with restarting pootle server-processes to
workaround the memory-leak (whether you call it leak or not, I
definitely call it leak). It limits the amount of concurrent worker
processes, but that is not a problem since even with just the two as
in the VM, you can easily serve 50 concurrent requests per second
while benchmarking (resulting in being able to handle about 200
requests/second on the frontpage, thus plenty of reserves available. No
problem)

Maybe you don't have, but the translators will have a problem with this.
So either we can provide more resources to the VM, or we need a
different solution.

All other discussion is very academic at the moment.

regards,

André

Hi Andre, *,

Please leave the technical discussion aside for a minute.

No, cannot do that, as this is directly related to your point:

What *we* as a team of localizers need is a working and performant
setup to do our translation work.

Yes, and I fully agree, and that of course is also my highest measurement/goal.

It is my true belief that the current setup will handle this just fine.

- we had no full localization on the server (but we will have to provide
this for 3.4 localization)

See the other messages. What is time-consuming (and no memory will help
here, as it is CPU bound) is the importing of the new data, and the time
until it is ready after a server restart. Nothing to do about it,
apart from the admins actually doing the import using differently
formed commands that make use of all of the assigned CPUs.

- hardly any team did full translation on the server, we "only" did
bugfixes

See my post. I'm /sure/ the server can handle the full
online-translation within pootle.
I'm sure that the server will not get more than 10 apache requests per
second on average, and the server can handle about 200.

There were some good reasons to have a rather well-equipped server for
pootle.

A memory leak/wasteful setup is not a good reason for this.

We knew that pootle might not be the perfect solution regarding
memory usage, multithreading... That we now cannot use the initially
planned setup is very unfortunate. But we still need a system that
provides similar performance.

Again: If performance is not satisfactory for translation work, I'll
reconsider. But again: I don't see any evidence that performance could
be increased by assigning more RAM to the VM.

I clearly explained why I'm convinced that it is a memory leak. It
doesn't free the memory, even when the machine is idling for hours. It
doesn't need that memory, since when the worker is replaced by a fresh
one, the functionality still works.

No - it is just that this is totally irrelevant in the current situation.

No it is not.

Even if pootle has a memory leak, we won't fix this within the next few
days. But we need to start translating asap!

Again: It is /perfectly working/ when restarting the worker threads.
And the translator will not notice that the worker-thread has been
replaced. The translator will not notice whether the few milliseconds
he has to wait are
* because the keep-alive expired and a new http-connection has to be negotiated
* a server process expired and thus is restarted
* seeking through the actual file for next/previous match took a little longer.

So - if there is any way to provide more resources (memory), we should do
this and analyze the root cause later (I'm sure pootle developers are
interested to help with this).

/IF/ there are memory related problems, I'll assign more RAM. But
again: everything I experienced so far says: More memory won't help at
all.

Again:
* I don't have a problem with pootle taking half a day to import the
files for a new release (one time thing, no big deal)
* I don't have a problem with pootle taking very long for the very
first hit of the frontpage after a reboot (system is not rebooted that
often, also no big deal)
* I don't have a problem with restarting pootle server-processes to
workaround the memory-leak (whether you call it leak or not, I
definitely call it leak). It limits the amount of concurrent worker
processes, but that is not a problem since even with just the two as
in the VM, you can easily serve 50 concurrent requests per second
while benchmarking (resulting in being able to handle about 200
requests/second on the frontpage, thus plenty of reserves available. No
problem)

Maybe you don't have, but the translators will have a problem with this.
So either we can provide more resources to the VM, or we need a
different solution.

Again: The stuff that takes ages is /NOT/ solvable by assigning
more resources to the VM. Those tasks are CPU-bound, and all of them are
single-threaded.
If the process maxes out a single CPU, that is as much speed as you
can get, no matter how much RAM is sitting idle.

* Pootle has a very, very poor first-start performance.
→ not a problem, as the server will not be rebooted every other day.
And in case this wasn't clear: restarting the worker processes will
/not/ have the same effect; the user will not notice anything.

* Pootle has poor performance when generating the zips for download
(first trigger per language and project)
→ This again is CPU bound, and again: More RAM will not help.

This is the only case where the user can have a problem (and is part
of the problem).
It doesn't help to click multiple download links on one page to get
the zips faster, on the contrary. Click one, wait for the processing
to be done, and once you've got the first one, feel free to download all
the remaining ones.
When multiple users request huge zips at the same time, all server
processes are busy. Currently there are two, can be increased to 4,
but that's not a big difference. It is CPU bound (and also does a
little disk i/o) and I repeat myself once again: MORE RAM WILL NOT
HELP!!!

If you want a dedicated pootle server, then order a 64-core system.
With that, you can hide pootle's weak spots. It will be idle 99% of
the time, but at the moments when 10+ people get the idea to
download the zips at the same time, you won't run out of CPUs.

All other discussion is very academic at the moment.

No - it is just very tiring that people still stick with their guesswork.

Again: The only problem is creation of zips for download. This takes
(in terms of web-responsiveness) /ages/, and blocks the server's
process while it is performed.
It is CPU-bound, and thus the amount of workers one can have is very limited.
Pootle must be reconfigured to limit these actions, or (simpler to
implement as a workaround) all the zips must be created once per day by
a cronjob (or better, distributed over the day, as creating them all
at once would again create the CPU bottleneck), only allowing downloads
of those. Thus you don't get 100% up-to-date versions of the
files.
But when you download the zips, you're not interested in the current
state anyway, as just after you download someone else could have
edited in pootle thus creating the same issue.
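
A minimal sketch of that pre-generation idea (assuming the
/<language>/<project>/export/zip URL scheme from the vhost config
discussed later in this thread; host, language list and delay are
only illustrative), to be run from cron:

import time
import urllib.request

HOST = "https://translations.documentfoundation.org"
LANGUAGES = ["de", "fr", "lt"]             # illustrative subset
PROJECTS = ["libo34x_ui", "libo34x_help"]

for lang in LANGUAGES:
    for project in PROJECTS:
        url = "%s/%s/%s/export/zip" % (HOST, lang, project)
        try:
            # fetch sequentially, so only one zip is generated at a
            # time and parallel exports don't fight over the CPUs
            urllib.request.urlopen(url).read()
        except OSError as exc:
            print("warming %s failed: %s" % (url, exc))
        time.sleep(60)  # spread the load over the day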

Again: * Don't request additional zips of a project before the first
one is delivered to you.

ciao
Christian

Hi all,

Pootle is now unlocked, and you can start working on the libo34x_ui and libo34x_help projects. As usual, the word counts vary a bit, especially in libo34x_help. Ignore that – the actual numbers should be the same (I checked them on the command line), with the small exception of fr/libo34x_ui, which Andras will take a look at this evening.

Best regards,
Rimas

Hi Sophie,

On 2011.04.07. 16:23, Rimas Kudelis wrote:

Pootle is now unlocked, and you can start working on the libo34x_ui and
libo34x_help projects. As usual, the word counts vary a bit, especially
in libo34x_help. Ignore that – the actual numbers should be the same
(I checked them on the command line), with the small exception of
fr/libo34x_ui, which Andras will take a look at this evening.

I found corrupted lines in formula\source\core\resource.po. I fixed
them, so your wordcount is now the same as the others'.

Please note, however, that there are the following differences between
the LibreOffice source and formula\source\core\resource.po:

Pootle:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_BAHTTEXT.string.text
msgid "BAHTTEXT"
msgstr "BAHTTEXT"

Source:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_BAHTTEXT.string.text
msgid "BAHTTEXT"
msgstr "BAHTTEXTE"

Pootle:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_CHISQ_INV.string.text
msgid "CHISQINV"
msgstr "KHIDEUX.INVERSE"

Source:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_CHISQ_INV.string.text
msgid "CHISQINV"
msgstr "LOI.KHIDEUX.INVERSE"

Pootle:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_TABLE_OP.string.text
msgid "MULTIPLE.OPERATIONS"
msgstr "OPERATION.MULTIPLE"

Source:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_TABLE_OP.string.text
msgid "MULTIPLE.OPERATIONS"
msgstr "OPERATIONS.MULTIPLES"

Please review these strings in Pootle.

Cheers,
Andras

Hi *,

What *we* as a team of localizers need is a working and performant
setup to do our translation work.

Yes, and I fully agree, and that of course is also my highest measurement/goal.

I enabled caching of static content (images, javascript, css), so
while that has no impact on the speed of the server itself, it has a
huge impact on the feel of the site for users, as now the bulk of
a request doesn't have to be re-downloaded each time (the html is less
than 20% of a typical page request). Now the browser only needs to
download the html and can use the other files from its cache.
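
For the record, a sketch of what such caching can look like in the
Apache vhost (assuming mod_expires is loaded; the matched extensions
and lifetime are illustrative, not necessarily the exact directives
now in place):

<LocationMatch "\.(css|js|png|gif|ico)$">
    ExpiresActive On
    ExpiresDefault "access plus 1 week"
</LocationMatch>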

[...]
* Pootle has poor performance when generating the zips for download
(first trigger per language and project)
→ This again is CPU bound, and again: More RAM will not help.
[...]
When multiple users request huge zips at the same time, all server
processes are busy. Currently there are two, can be increased to 4,
but that's not a big difference. It is CPU bound (and also does a
little disk i/o) and I repeat myself once again: MORE RAM WILL NOT
HELP!!!

As this is the real (and only) problem with the server (a few users
request the zips and thus block the workers, which then cannot handle
other requests until generating the zips is finished), I kept thinking
about it, and the easiest solution seems to be to just separate the two,
i.e. create a separate WSGIDaemonProcess group and use that for the
export requests.

Thus only the exporters can run out - but then only the other users
wanting the zips will have to wait; those who just want to
review/translate in pootle itself can continue their work.

Thus especially to Friedel and Dwayne and Rimas: Is there a problem with adding

WSGIDaemonProcess pootle-export threads=3 stack-size=1048576
maximum-requests=5 inactivity-timeout=1200 display-name=%{GROUP}

<Location /*/*/export/zip>
    WSGIProcessGroup pootle-export
</Location>

to the vhost's config? In turn, the process lifetime of the regular
threads can then be increased again. How many of the export workers
can be allowed is then more a question of the stress concurrent ones add
to the i/o (be it mysql related or raw disk i/o) and less one of memory
wastage.

ciao
Christian

Hi Andras,

Hi Sophie,

On 2011.04.07. 16:23, Rimas Kudelis wrote:

Pootle is now unlocked, and you can start working on the libo34x_ui and
libo34x_help projects. As usual, the word counts vary a bit, especially
in libo34x_help. Ignore that – the actual numbers should be the same
(I checked them on the command line), with the small exception of
fr/libo34x_ui, which Andras will take a look at this evening.

I found corrupted lines in formula\source\core\resource.po. I fixed
them, so your wordcount is now the same as the others'.

Thanks a lot, that's really great :)

Please note, however, that there are the following differences between
the LibreOffice source and formula\source\core\resource.po:

Pootle:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_BAHTTEXT.string.text
msgid "BAHTTEXT"
msgstr "BAHTTEXT"

Source:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_BAHTTEXT.string.text
msgid "BAHTTEXT"
msgstr "BAHTTEXTE"

Pootle:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_CHISQ_INV.string.text
msgid "CHISQINV"
msgstr "KHIDEUX.INVERSE"

Source:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_CHISQ_INV.string.text
msgid "CHISQINV"
msgstr "LOI.KHIDEUX.INVERSE"

Pootle:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_TABLE_OP.string.text
msgid "MULTIPLE.OPERATIONS"
msgstr "OPERATION.MULTIPLE"

Source:
#: core_resource.src#RID_STRLIST_FUNCTION_NAMES.SC_OPCODE_TABLE_OP.string.text
msgid "MULTIPLE.OPERATIONS"
msgstr "OPERATIONS.MULTIPLES"

Please review these strings in Pootle.

Ok, again thanks a lot for your work on this.

Kind regards
Sophie

Hi,

On 2011.04.07 20:38, Christian Lohmaier wrote:

Thus especially to Friedel and Dwayne and Rimas: Is there a problem with adding

WSGIDaemonProcess pootle-export threads=3 stack-size=1048576
maximum-requests=5 inactivity-timeout=1200 display-name=%{GROUP}

<Location /*/*/export/zip>
     WSGIProcessGroup pootle-export
</Location>

to the vhost's config? In turn, the process lifetime of the regular
threads can then be increased again. How many of the export workers
can be allowed is then more a question of the stress concurrent ones add
to the i/o (be it mysql related or raw disk i/o) and less one of memory
wastage.

I like the idea, but I'll leave it to Friedel and Dwayne to judge if it's problematic or not.

Rimas

Hi *,

On 2011.04.07 20:38, Christian Lohmaier wrote:

Thus especially to Friedel and Dwayne and Rimas: Is there a problem with
adding

WSGIDaemonProcess pootle-export threads=3 stack-size=1048576
maximum-requests=5 inactivity-timeout=1200 display-name=%{GROUP}

<Location /*/*/export/zip>
    WSGIProcessGroup pootle-export
</Location>

[...]
I like the idea, but I'll leave it to Friedel and Dwayne to judge if it's
problematic or not.

As there has been no reply, I just added that (with LocationMatch
instead of Location, as it's not just lang/project/export/zip, but
<variable-length>/export/zip) and also limited the number of
concurrent export jobs to 1 (otherwise requesting zips for the same
language and project would consume about twice the amount of time - if the
user waits until the first processing is finished, and then requests the
other zip, the user will get that second zip instantly).
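
In case others want to replicate it, the adjusted snippet looks
roughly like this (a sketch - processes=1 threads=1 is what limits
concurrent exports to one, the other values as proposed before):

WSGIDaemonProcess pootle-export processes=1 threads=1 stack-size=1048576 \
    maximum-requests=5 inactivity-timeout=1200 display-name=%{GROUP}

<LocationMatch "^/.+/export/zip$">
    WSGIProcessGroup pootle-export
</LocationMatch>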

There don't seem to be any drawbacks, so that's the way to go - this
way regular translation will not be blocked/affected by the exporting
blocking all available worker slots, all RAM or all CPU.

ciao
Christian

Hi *,

On 2011.04.07 20:38, Christian Lohmaier wrote:

[separate worker for export/zip]
There don't seem to be any drawbacks, so that's the way to go - this
way regular translation will not be blocked/affected by the exporting
blocking all available worker slots, all RAM or all CPU.

Unfortunately, real usage also lets the regular workers grow insanely
(from a standard of around 80/90MB to 700MB) - thus I had to reduce the
lifetime of the regular workers again as well.

ciao
Christian

Hello,
I've been experiencing horrible performance in Pootle for a couple of days.
It can't possibly affect only me:

- there are untranslated strings, but I can't access them
- uploading a .po file is too slow (more than 2 minutes for 45KB)
- terrible navigation performance

... Please,

Errors:

Server error!

The server encountered an internal error and was unable to complete
your request.

Error message:
Premature end of script headers: wsgi.py

If you think this is a server error, please contact the webmaster.

Error 500

translations.documentfoundation.org
Tue Apr 12 15:39:06 2011
Apache

Hello,
I've been experiencing horrible performance in Pootle for a couple of days.
It can't possibly affect only me:

- there are untranslated strings, but I can't access them

I experienced the same. Solution: on the Settings page you can set how
many rows are displayed on one page. When I set it to 50, I had the
same problem. I set it to 30 and now it is fine.

- uploading a .po file is too slow (more than 2 minutes for 45KB)
- terrible navigation performance

I hope that when the help update processes - which have been running in the
background since morning - finish, the site will be more responsive.

Best regards,
Andras

Hi *,

Hello,
I've been experiencing horrible performance in Pootle for a couple of days.
It can't possibly affect only me:

- there are untranslated strings, but I can't access them

I experienced the same. Solution: on the Settings page you can set how
many rows are displayed on one page. When I set it to 50, I had the
same problem. I set it to 30 and now it is fine.

- uploading a .po file is too slow (more than 2 minutes for 45KB)
- terrible navigation performance

I hope that when the help update processes - which have been running in the
background since morning - finish, the site will be more responsive.

The problem is that the update does not run as much in the background
as was thought; on the contrary, it pretty much blocks regular
operation due to the heavy i/o load it causes.

This is partially to blame on the bad database design used by pootle,
and partially on the fact that the mysql db currently uses MyISAM, which
locks the whole database table and thus limits concurrency.

ciao
Christian