Engaging users: initial results of the survey

Hi Charles,

> Next time I'm sure you can join us in the weeks during which we
> discussed the survey on the marketing and project's list :slight_smile:

Gladly, but the main reason I'm not on the marketing list, or only
rarely follow it, is what I see as the atmosphere of zealotry that
tends to reign there. I have nothing against being keen to support
and promote a product, but I draw the line at losing all objectivity.
Unfortunately, that is how I perceive the marketing list to function.

> - penetration of the product;

> I honestly would not think there's relevant data for this in the
> survey and from the respondents.

Hmm, yes, I realize that, but neither was that my intention in that
particular statement. What I meant is that this survey appears not to
have been promoted in such a way as to maximise the number of returns. I
can imagine a number of reasons for this, but I wonder if you remember,
back in the old days of Sun StarOffice, when Sun ran a user-oriented
survey that was linked to the installation (or post-installation /
start-up / one month's use) of the product?

Although this might have seemed invasive to many at the time (I really
don't know), I actually feel that this is quite a good idea to borrow
from. Much in the same way that the download page now links you to
donations to LibreOffice, perhaps it would be possible to organise
future surveys via a similar mechanism? In other words, make it so
that, say, one month after the initial installation of LO, the user is
directed to a web page to participate in a survey relating to the
usage or desiderata of the product (from the user's perspective, of
course).

> - reach of the survey;

> Good question with no easy answer. The survey was localized in 5
> languages besides English. The link was posted here and on several
> other user mailing lists. The word was spread on the Facebook
> LibreOffice page and Google+ and, to a lesser extent, on Twitter.

Yes, and it felt to me that people who were already on the mailing lists
would be more inclined to respond to the survey anyway, since that
means of communication was used foremost (it is how I found out about
the existence of the survey). Certainly, that seems to have been the
behavioural response on this list. Again, this would be considered
normal behaviour for people who are already on the project mailing
lists and occasionally like to have a say in, or just follow,
contributions from others. "Preaching to the converted", I believe the
French say.

> - the survey could have had a bigger and much deeper outreach if it
>   had been pushed directly to the users, say at the installation phase
>   or even through a mechanism allowing users to respond to it via the
>   StartCenter. That was obviously not the case, so in the end we
>   reached out to the users who are on the project's mailing list and
>   connected to us through our social networks. This leaves out plenty
>   of users irrespective of their language.

Yes, I understand, hence my suggestion above to think back to how Sun
went about handling a similar situation.

> - design of the survey;

> What would you like to know? The survey was designed to be
> progressive in its questioning, as all surveys should be. Beyond
> that, don't read too much into the survey methodology; I'm not sure
> it is that sophisticated, unless of course you would like to get a
> particular answer in advance, and that's precisely what we wanted to
> avoid.

Nonetheless, as others here have indicated, it did seem that the
questions were biased towards a particular goal, i.e. showing that the
website or the project's communication methods were not quite there yet,
or that the project hadn't managed to foster the required "community
impetus" due to a failure in one or more areas.

> - length and time for which the survey ran?

> The survey started on the 31st of October and expired yesterday.

Thanks for taking the time to answer my different points Charles, I
appreciate it.

Alex

Thanks for your input Dan!

Best,

Charles.

Hi Charles,

> Next time I'm sure you can join us in the weeks during which we
> discussed the survey on the marketing and project's list :slight_smile:
>>

> Gladly, but the main reason I'm not on the marketing list, or only
> rarely follow it, is what I see as the atmosphere of zealotry that
> tends to reign there. I have nothing against being keen to support
> and promote a product, but I draw the line at losing all
> objectivity. Unfortunately, that is how I perceive the marketing
> list to function.

Well, I truly hope we're changing that perception and that the
marketing list can be a place for collaborative work :slight_smile:

>>
>> - penetration of the product;
>
> I honestly would not think there's relevant data for this in the
> survey and from the respondents.

> Hmm, yes, I realize that, but neither was that my intention in that
> particular statement. What I meant is that this survey appears not to
> have been promoted in such a way as to maximise the number of
> returns. I can imagine a number of reasons for this, but I wonder if
> you remember, back in the old days of Sun StarOffice, when Sun ran a
> user-oriented survey that was linked to the installation (or
> post-installation / start-up / one month's use) of the product?
>
> Although this might have seemed invasive to many at the time (I
> really don't know), I actually feel that this is quite a good idea to
> borrow from. Much in the same way that the download page now links
> you to donations to LibreOffice, perhaps it would be possible to
> organise future surveys via a similar mechanism? In other words, make
> it so that, say, one month after the initial installation of LO, the
> user is directed to a web page to participate in a survey relating to
> the usage or desiderata of the product (from the user's perspective,
> of course).

I do remember it well, but it sure came with lots of criticism too
(much more than the present one).

>
>> - reach of the survey;
>
> Good question with no easy answer. The survey was localized in 5
> languages besides English. The link was posted here and on several
> other user mailing lists. The word was spread on the Facebook
> LibreOffice page and Google+ and to a lesser extent on Twitter.

> Yes, and it felt to me that people who were already on the mailing
> lists would be more inclined to respond to the survey anyway, since
> that means of communication was used foremost (it is how I found out
> about the existence of the survey). Certainly, that seems to have
> been the behavioural response on this list. Again, this would be
> considered normal behaviour for people who are already on the
> project mailing lists and occasionally like to have a say in, or
> just follow, contributions from others. "Preaching to the
> converted", I believe the French say.

> - the survey could have had a bigger and much deeper outreach if it
> had been pushed directly to the users, say at the installation
> phase or even through a mechanism allowing users to respond to it
> via the StartCenter. That was obviously not the case, so in the end
> we reached out to the users who are on the project's mailing list and
> connected to us through our social networks. This leaves out
> plenty of users irrespective of their language.

> Yes, I understand, hence my suggestion above to think back to how
> Sun went about handling a similar situation.

My honest answer would be: not enough resources.

>
>
>> - design of the survey;
>
> What would you like to know? The survey was designed to be
> progressive in its questioning, as all surveys should be. Beyond
> that, don't read too much into the survey methodology; I'm not sure
> it is that sophisticated, unless of course you would like to get
> a particular answer in advance, and that's precisely what we
> wanted to avoid.

> Nonetheless, as others here have indicated, it did seem that the
> questions were biased towards a particular goal, i.e. showing that
> the website or the project's communication methods were not quite
> there yet, or that the project hadn't managed to foster the required
> "community impetus" due to a failure in one or more areas.

The questions were biased not because we wanted people to tell us
something we wanted to hear, but because the survey stems from
discussions where an analysis was drawn, namely, we don't engage users
in our community and we don't talk to them enough. In this sense, it is
biased because the questions were framed around this thinking. But to
take only one example, the answers could have been something like a
majority telling us they don't have the time and another large chunk
telling us they already donated money. And that would not have led to
the same conclusion and would ultimately have invalidated the thinking.

>
>> - length and time for which the survey ran?
>
> The survey started on the 31st of October and expired yesterday.

> Thanks for taking the time to answer my different points Charles, I
> appreciate it.

Thank you for your feedback!

Hi :slight_smile:
If a troll gets banned they usually just find a new name, or even go
so far as to set up a new free email account.

The only ways that seem to work are:
1. to let them know their behaviour is being (possibly) misconstrued
as unhelpful and unfriendly;
2. to avoid taking any notice of them when they are being bad (i.e.
"don't feed the troll").

Obviously mixing 2 with 1 won't work, so we try 1 first and then stick
with 2. That is why no-one is responding to your request. So now
Urmas knows he can stir up a reaction from 1 person by using bad
language, but he/she has seen that no-one else reacts. So Urmas will
probably move on to other tactics, such as using lots of capital
letters to simulate shouting and trying to find topics that "get under
people's skin" in order to try to provoke reactions that way.

Note that Urmas is a bit unusual for a troll in that he/she does give
some quite good answers sometimes, although that might be part of the
overall strategy.

I tend to find that trolls either get bored and go away or just look
increasingly ridiculous and laughable OR their behaviour shifts into
becoming more acceptable. It's this last group that are often of more
value to the community than most people because all their angst and
angry energy can be a very positive driving force when it's not being
so badly misdirected.

Reacting to the troll's bad behaviour makes it look like we have no
idea how to control our own doorstep. At the very worst, if they are
being paid by MS to stir up trouble, then by ignoring them and
continuing to give happy & friendly or professional & cool answers,
the mailing list looks much more like a good place for users to get
help. If you have ever been to a nightclub that has "bouncers" on the
door, or to a posh hotel with a concierge standing out the front, then
you might have noticed how there never seems to be any fight going on
out the front. Even if the bouncer/concierge was absolutely,
unquestionably right, the venue/hotel loses its good name and the
bouncer/concierge loses their job and finds it difficult to get
similar work elsewhere in town. The trick is to make it obvious that
trying to start a fight won't get anywhere.

So, don't worry. We do have experience of dealing with this situation
and it is being dealt with even though it's not obvious on the
surface.
Regards from
Tom :slight_smile:

Hi Jim

James E. Lang wrote

Pardon my ignorance. I don't even know what Nabble or gmane are. If they
are "social media" then this old geezer doesn't participate and that's by
choice. Mailing lists, product specific forums, and even (to a limited
extent) IRC channels are fine.

Nabble is an interface for a mailing list. For instance you can see this
discussion in a tree view here
http://nabble.documentfoundation.org/Engaging-users-initial-results-of-the-survey-tp4082398.html

James E. Lang wrote

What's the point of "Ask LibreOffice" if each question is seen, say,
3 times in a one-week period? Most questions are unanswered.
Similarly with the LibreOffice forum. A user might not bother to sign
up for a method that is hardly ever used by relevant users; and if
they go through with it anyway and no answer is provided (as is the
case with most "Ask LibreOffice" topics), it would probably generate
a rejection response towards LibreOffice.

What is "Ask LibreOffice"? How is it accessed?

Ask LibreOffice refers to http://ask.libreoffice.org
You can log in to ask or answer using e.g. your Gmail account; no
need for a new registration.

Regarding the comments you are quoting, I can only say that the people
answering questions are volunteers (such as myself) and they can (and
should) only answer when they know an answer. If someone asks e.g. how
to use LO to print sideways on a Mac, personally I have no idea,
because I have never used a Mac, so I leave the question unanswered...

Secondly, since contributing is voluntary, I don't feel obliged to
stop by every day or scroll through all the unanswered questions since
my last visit... I hope that there are enough collaborators so that at
any moment a question can be answered by someone.

My 2 (voluntary) cents.

FYI I'm not a member of, or affiliated with, TDF, nor linked to this
project in any way. All my contributions are voluntary and free :slight_smile:

Hi Ken

snowshed wrote

But, basic things should work right. In my list I have here, but have
not tested in 4.x.x

You are right. It is sometimes amazing how simple things don't work or stop
working (regressions)...
But overall they tend to get better. I sincerely recommend that you give
version 4.1.3 a go.
If it still doesn't work for you, then try again when 4.1.6 is out. Or
4.2.3... Never try x.0 versions :wink:

snowshed wrote

No, but the quality mechanic says "I'm booked up for a month, if you can
wait that long." Then he'll fix it, he won't ignore you. That gives
you the option of waiting or finding a new mechanic.

Very interesting point. That is indeed one of the flaws in the volunteer
method used in this project...
There are other volunteer models that would probably work better.

snowshed wrote

Contributing by reporting issues is all I can do for any software, but
eventually you hope to see the things you find fixed. You find these
problems because they are features of a software package that you use.
After some period of time, when things don't get fixed, you begin to
think, "Why bother?"

That is another good point. If people bother to report a bug and it is not
fixed for years (no matter how small or insignificant) then they are very
unlikely to bother to report another one...

In the end, the main enemies of this project are: the sheer size of the
project and the volunteer "do only what you feel like" model.

Best regards,
Pedro

Hi Urmas,

Then why does the LO site not feature a warning on a front page: "User,
you are ...

  So - trying to condense something positive out of this; it sounds like
we should have something (perhaps in the BSA) that explains to
newcomers that bug filing is appreciated - but that expecting their
bug to be fixed and/or prioritized over others, or feeling entitled to
a fix just because they filed it, is unrealistic [ given the vast
disparity in time-to-file (1 minute) vs. time-to-fix (5 man-days
avg) ].

  I assume it would help smooth the flow of people feeling unreasonably
irritated if we had some more helpful text like that? If so, it'd be
great to have a draft we can hand on to Rob / Cloph to put somewhere
in some "About bug filing" page, perhaps shown after filing.

One cannot build straw airplanes for 25 years instead of listening to
feedback ...

  I listen to feedback a lot =)

  ATB,

    Michael.

Hi Ken,

After using LO for a while, I found and filed a couple of bugs/issues. I
wanted to contribute in the area of reporting issues, but I don't have
the knowledge to fix them. I didn't expect those problems to go to the
head of the line. But I *did* expect them to be put in the queue and
eventually fixed.

  The problem of course is that there is no queue of bugs-to-fix. We try
to prioritize issues, so that we can see those that are seriously
debilitating and then try to fix those on a best-effort basis.

What I didn't like was being told my issues were not important. BS!
It's important to me.

  This is the interesting piece to me. Can you expand on your experience
there? Clearly all bugs are important to someone - but not all are
'Critical' or whatever from a prioritization perspective. Nevertheless,
perhaps the naming of those prioritizations is needlessly offensive.
Potentially with our new bugzilla we could use P1 -> P6 or whatever -
making it clear that this is a spectrum.

Let's say you have a car, and every 4th time you go to use it, it won't
start. You take it to your mechanic, and each time you do, he tells you
"it's not important, he's got bigger problems to solve". Are you going
to continue to take it to that mechanic, or are you going to find a
different mechanic?

  I'm really not sure that there are any mechanics out there that do work
for free; I've not met one. Of course - if you want to pay for a bug to
be fixed, our level-3 bug queue has only a handful of open bugs, and
they turn over on a weekly basis. But I strongly suspect you don't want
to pay.

  So - perhaps a more apt analogy is taking your car to a local friendly
volunteer / free mechanic down the road who helps people out of the
goodness of their heart - and berating them for not spending a week
investigating and fixing the squeak in your suspension -now- because
he's been working trying to get other people's cars to start at all :wink:

  Anyhow - there is no desire to offend people through the prioritization
flow; that is a really critically useful function of QA though - so
ideas on how we can improve it are appreciated.

  All the best,

    Michael.

Servus Michael,

What you are saying is true, but one little thing:

[ given the vast disparity in time-to-file (1 minute) vs.
time-to-fix (5 man days avg) ]

...in correlation with 1 developer investing 5 days to fix it vs.
maybe hundreds, thousands or even hundreds of thousands of users
wasting for example 5 minutes per day due to the bug.
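Back-of-the-envelope, with purely made-up numbers (none of these figures are project data), the trade-off looks like this:

```python
# Illustrative only: compares a one-off developer fix cost against the
# aggregate time users keep losing to an unfixed bug. All numbers are
# hypothetical assumptions, not measurements.

DEV_MINUTES = 5 * 8 * 60  # 5 man-days at 8 hours/day = 2400 minutes

def days_to_break_even(users: int, wasted_min_per_day: float = 5.0) -> float:
    """Days until the users' cumulative lost time exceeds the fix cost."""
    return DEV_MINUTES / (users * wasted_min_per_day)

# With 10,000 affected users each losing 5 minutes a day, the fix
# "pays for itself" in a small fraction of one day.
print(days_to_break_even(10_000))
```

Even at only 100 affected users, the break-even comes in under five days of aggregate user time.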

:smiley:

Cheers,
Stefan

Stefan Weigel wrote

[ given the vast disparity in time-to-file (1 minute) vs.
time-to-fix (5 man days avg) ]

...in correlation with 1 developer investing 5 days to fix it vs.
maybe hundreds, thousands or even hundreds of thousands of users
wasting for example 5 minutes per day due to the bug.

+1

Plus the hundreds of thousands that give up on LO *because* of the bug.

The logic that "a bug is not important because not many people report it"
has a big flaw: most people give up without bothering to report... (and in
the case of Bugzilla, reporting requires quite a lot of effort)

My suggestion? Change volunteer model.

In other volunteer-based organizations (humanitarian, animal protection,
etc.), people agree to work on tasks assigned to them even if that is not
what they feel like doing. The "do whatever you feel like" model doesn't
work.

My 2 cents.

I think there's a dangerous perception here: The perception that the LO
developers work on nothing except what they want to work on.

I'm pretty sure that that is very false. They work on LO because they
want to work on LO, and they probably choose what to work on based on
at least the following:

a) How many people report a specific bug
b) How serious that bug is to people's productivity
c) How in line that bug is with current roadmaps
d) Regression bugs probably have higher priority
e) What parts of the system they know the best
f) How much time they have at the moment, and how big the bug is
g) How much fun they think it will be (or more likely, which bug will
be the least annoying to fix)

The logic that "a bug is not important because not many people report
it" has a big flaw: most people give up without bothering to
report... (and in the case of Bugzilla, reporting requires quite a
lot of effort)

This logic may not be true in an absolute sense, but consider that it's
not normally a question of how important the bug is on its own merits,
but how important the bug is compared to other bugs. If 50 people have
reported bug A and only 2 people have reported bug B, then bug B may
still be important, and there may be another 20 people who haven't
reported it; but it is still not as important as bug A, since there may
equally be another 20 people who haven't reported bug A.

In other volunteer-based organizations (humanitarian, animal
protection, etc.), people agree to work on tasks assigned to them
even if that is not what they feel like doing. The "do whatever you
feel like" model doesn't work.

And I don't think anybody is suggesting that that model is the
predominant one here.

I have pointed out in the past that you cannot expect a developer to
work on your bug, because there is nothing forcing him to work on
anything but what he wants to, but that doesn't mean that is all he
does. It means you can't *force* him to work on what *you* want him to
work on. I'm sure the developers *do* give careful consideration to
what they work on; it just might not be what you feel they should work
on, but they've got a bigger picture than you do.

Remember, we do have to keep the developers happy to some extent,
otherwise they leave. This is true even if they are getting paid (I've
left more than one company which was paying me, because I was unhappy
with some other aspect of the situation), and especially true if they
are not getting paid. Also, just because they don't prioritise the bugs
you think are important, doesn't mean they are cherry-picking just the
bugs they like, it probably just means they had other, more important
things to deal with.

Just something to keep in mind.

Paul

Paul-6 wrote

I think there's a dangerous perception here: The perception that the LO
developers work on nothing except what they want to work on.

I didn't mean to say that.
I'm aware that some developers work on whatever is needed and fix the most
urgent bugs/regressions.
But out of 300 developers, there must be people who can fix the "boring"
bugs and the "not important" bugs... Of course you would have to ask these
developers to start with bug #1 and fix it before moving to #2

Michael Meeks once wrote "Developers don't like to be told what to do". I'm
sure they don't. But if nobody does then there is no solution for bugs that
keep lingering...

Paul-6 wrote

I have pointed out in the past that you cannot expect a developer to
work on your bug, because there is nothing forcing him to work on
anything but what he wants to, but that doesn't mean that is all he
does. It means you can't *force* him to work on what *you* want him to
work on. I'm sure the developers *do* give careful consideration to
what they work on, it just might not be what you feel they should work
on, but they've got a bigger picture than you.

First, it's not *my* bug. The bug is in the software. The software is not mine.
Second, many times I already have a solution for the problem. I only report
it so that the bug is fixed for the benefit of the community. I even report
bugs that don't affect me at all.
Third, "there is nothing forcing him to work on anything but what he wants
to" is exactly the problem IMO.

Paul-6 wrote

Remember, we do have to keep the developers happy to some extent,
otherwise they leave.

Yes, so do other people. But they are not so important, right?
If you can't tell developers what to do, some bugs will always be there
because they are boring to fix or because they are "not important".

I'm suggesting that a compromise-based volunteer model be applied to all,
not just to developers. Then you might start to see a change and a real
community :wink:

Paul,

Just completing Pedro's answers inline...

Paul-6 wrote
> I think there's a dangerous perception here: The perception that
> the LO developers work on nothing except what they want to work on.

I didn't mean to say that.
I'm aware that some developers work on whatever is needed and fix the
most urgent bugs/regressions.
But out of 300 developers, there must be people who can fix the
"boring" bugs and the "not important" bugs... Of course you would
have to ask these developers to start with bug #1 and fix it before
moving to #2

Michael Meeks once wrote "Developers don't like to be told what to
do". I'm sure they don't. But if nobody does then there is no
solution for bugs that keep lingering...

and nobody says the system is perfect. :wink:
But to come back to Paul's objection, yes, developers work on what they
want to work on. Their motivation can be anything from a salary to some
dream they want or yet another thing that keeps them awake at night.
Somewhere in between, I'm sure, there's a reasonable guy. But "whatever
is needed" is prone to a wide range of interpretation.

Let me give you an example. While "your" bug (good point Pedro, by the
way) wasn't being fixed, a guy called Caolan McNamara, who wrote the
code of the word-processing module back in the days of OpenOffice.org,
took on the daunting task of rewriting the entire graphical system of
LibreOffice. And mind you, we're talking about over 6 million lines of
code for a suite like LibreOffice. Was it necessary? Hell yes. Was it a
high priority? Absolutely. Did he have the time to focus on the bug
you're mentioning? No.

But to him, this objective was of the highest importance and it was
*sorely* needed. I'm not saying the bug you reported wasn't important.
I'm saying that while you may be complaining, others are cheering.
Other bugs get fixed. See my point?

Paul-6 wrote
> I have pointed out in the past that you cannot expect a developer to
> work on your bug, because there is nothing forcing him to work on
> anything but what he wants to, but that doesn't mean that is all he
> does. It means you can't *force* him to work on what *you* want him
> to work on. I'm sure the developers *do* give careful consideration
> to what they work on, it just might not be what you feel they
> should work on, but they've got a bigger picture than you.

First, it's not *my* bug. The bug is in the software. The software is
not mine. Second, many times I already have a solution for the
problem. I only report it so that the bug is fixed for the benefit of
the community. I even report bugs that don't affect me at all.

+1

Third, "there is nothing forcing him to work on anything but what he
wants to" is exactly the problem IMO.

And yet that's how most FOSS projects work. But then again, no
system is perfect.

Paul-6 wrote
> Remember, we do have to keep the developers happy to some extent,
> otherwise they leave.

Yes, so do other people. But they are not so important, right?
If you can't tell developers what to do, some bugs will always be
there because they are boring to fix or because they are "not
important".

I'm suggesting that a compromise-based volunteer model be applied to
all, not just to developers. Then you might start to see a change and
a real community :wink:

Motivation is a hard thing to assess. Rather than reaching a
compromise in abstracto, I'd say that the compromise is found through
social engineering and everyone's motivation. Let's say that you are
reporting bugs on a regular basis. Some of these bugs are particularly
hairy ones, and that catches developers' attention. It's likely that
after two or three bug reports of that kind, at least some developers
will be paying attention.

Yet another way to look at it is through the number of volunteers
reporting the bug or making it an issue to tackle across the various
collaborative and communication channels we have around the project.
Basically, this is an invitation to contribute and get recognized. By
contributing, you get recognized, you earn bonus points, and your
credibility grows. Mind you, it works the same way for developers. And
because of that, when you, a known contributor, point out that there's
a leftover bug that may even already have a solution, it has a much
better chance of getting fixed.

Hope this helps,

CC'ing the website list because it is about the website, but I'm not subscribed, so please CC me on replies if you really want to discuss this complaint.

The problem of course is that there is no queue of bugs-to-fix. We try
to prioritize issues, so that we can see those that are seriously
debilitating and then try to fix those on a best-effort basis.

This prompted me to go file a bug (Feature Request) for something I've been meaning to file for some time now; then I couldn't remember if I'd already filed it before, so I wanted to check and see...

Well... I must say, I am horrified by what I found.

The 'new' website is extraordinarily difficult to navigate if you want to do anything other than download the latest version.

My simple goal was to log into the Bug system, check 'My Bugs' and see if I'd reported this yet, and if not, report it...

1. after going to www.libreoffice.org, why do I have to first find and click on 'Main Website' to get to ... the main website? Why isn't www.libreoffice.org the main website?

2. After going to the main website, I clicked on 'Get Help', and then clicked on 'Bug'...

The only option here is to report a bug.

What if I don't want to report a bug, but only want to search for bugs?

Even after I log in, the only next step available is to continue reporting a bug I don't want/need to report!?

This is HORRIBLY BROKEN.

How do I just log into the bug reporting system and search it? Anyone?

1. The survey seems to be a Self-seLected Opinion Poll (SLOP), so I'm
taking it with a grain of salt the size of the Sears Tower. There's no
margin of error included in the poll either, and given that the sample
was drawn from the mailing lists (where people are generally active
anyway) I'd say it's fairly skewed.
2. The conclusions are generic, wishy-washy and are based on guesses
and assumptions with no hard underlying data. How much in contributions
has LibreOffice raised? Does that fit in with what the survey said?
Where is the Quality Assurance in the web site? And why would an end
user be interested in that?
3. "User support and quality assurance do not require too much time or
technical knowledge." Remind me not to hire you for either of those
tasks in my business. Those are things that professional companies hire
entire other companies to do.

I'd give this project an F in a freshman statistics class, and would not
base any strategy on this "survey".
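To make the margin-of-error point in 1 concrete: the textbook 95% formula below assumes a simple random sample (which this poll is not - that is exactly the objection) and a respondent count of roughly 300, the figure cited elsewhere in this thread.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample of size n.

    Not valid for a self-selected opinion poll; shown only to give a
    best-case sense of scale.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Even under ideal random sampling, ~300 respondents give roughly a
# +/-5.7 percentage-point margin; self-selection bias comes on top.
print(round(100 * margin_of_error(300), 1))
```

And the margin only shrinks with the square root of the sample size, so quadrupling the respondents would merely halve it.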

Thanks John, I'll take it from your comment that
1) you are either a survey professional and you only wait for the next
survey to contribute your time designing it

and/or

2) you will contribute the costs of hiring a market research firm the
next time we need a survey.

Admittedly, neither I nor any of the other people who designed the
survey are professional survey designers.

Best,

For me I tried to navigate bugzilla to report continuing bugs where 2 formatting issues occur regularly:

1. Some cells in Calc have mixed size formatting in them or mixed fonts (parts are in Geneva 10 and parts in Geneva 12). I can go in by hand to each cell, select the piece that is the wrong size (12 point), change the font size and save the spreadsheet. At some point in the future (and no, it is not consistently the next time) I re-open the file and those corrections are gone; the fonts are back to mixed sizes. It happens all the time, but I can't get a small spreadsheet to demonstrate the problem. The one I am using that has the problem contains 15 years' worth of sheep data and records and has a number of separate sheets in it. I've tried to copy out smaller sections, and while I can get the small ones to have the mixed font issue, when I change the font in them it seems to stay changed.

2. Alternatively I click on the upper left blank cell above the row and column number lines to select the entire spreadsheet, then change the font and/or size. Cells with a single font and size in the text change as expected, but any cells with mixed sizes do not.

Unfortunately, Bugzilla is worse than difficult to use, and now it's even harder to find out how to do that.

Yet another way to look at it is to consider the number of volunteers
reporting the bug, or making it an issue to tackle, over the various
collaborative and communication channels we have around the project.

Eugenie (Oogie) McGuire
Desert Weyr http://www.desertweyr.com/
Paonia, CO USA

You made a survey without a survey statistician on your team. Did you
send out a request for such a person on the mailing lists to advise you
before you put together the survey? Did you have a clear and concise
question that you wanted to answer before you developed the survey
questions? Did you run the questions by such a professional on the
staff and check for confirmation bias?
I am not a professional statistician, and that's just what I spotted. I
have covered surveys as a journalist in my previous career, though. And
I also am a veteran of setting up business projects. A survey
statistician would have a lot more to say I am sure. And we're not even
starting on the analysis. In fact, I'd throw out the analysis and the
results and start anew. First off, define "users" (end users,
evangelists, business users?) and state the overall purpose of your
survey in a single question.
I regret some of the tone of the previous e-mail (first e-mail prior to
coffee), but there's nothing here to work with. You've got 300
self-selected users with at least two major questions in one survey that
you did not break out by region, sex, or profession. If you want
results, you need good data underneath.
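The complaint about unsegmented results can be made concrete: even without statistics software, responses can at least be tallied by segment before any conclusions are drawn. A minimal sketch in plain Python, assuming (hypothetically) that each response is a dict with fields such as "region" and "role" — these field names are illustrative, not from the actual survey:

```python
from collections import Counter

def break_out(responses, field):
    """Tally survey responses by one demographic field."""
    return Counter(r.get(field, "unknown") for r in responses)

# Toy data standing in for the (hypothetical) exported responses.
responses = [
    {"region": "EU", "role": "end user", "satisfied": True},
    {"region": "EU", "role": "evangelist", "satisfied": True},
    {"region": "US", "role": "end user", "satisfied": False},
]

print(dict(break_out(responses, "region")))  # {'EU': 2, 'US': 1}
print(dict(break_out(responses, "role")))    # {'end user': 2, 'evangelist': 1}
```

Such a breakdown does not cure self-selection bias, but it at least shows whether any segment dominates the sample before results are generalized to "users".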

You made a survey without a survey statistician on your team. Did you
send out a request for such a person on the mailing lists to advise
you before you put together the survey? Did you have a clear and
concise question that you wanted to answer before you developed the
survey questions? Did you run the questions by such a professional
on the staff and check for confirmation bias?

No. And apparently you have little awareness of how our project works.
But you make a couple of valid points.

I am not a professional statistician, and that's just what I
spotted. I have covered surveys as a journalist in my previous
career, though. And I also am a veteran of setting up business
projects. A survey statistician would have a lot more to say I am
sure. And we're not even starting on the analysis. In fact, I'd
throw out the analysis and the results and start anew. First off,
define "users" (end users, evangelists, business users?) and state
the overall purpose of your survey in a single question.
I regret some of the tone of the previous e-mail (first e-mail prior
to coffee), but there's nothing here to work with. You've got 300
self-selected users with at least two major questions in one survey
that you did not break out by region, sex, or profession. If you
want results, you need good data underneath.

You know, aside from being rather inaccurate, you're welcome to run another
survey. We're always looking for more volunteers. And I'm glad to help
you on this, so please go ahead.

best,

Charles.

Okay, I point out problems and your response is "if you don't like it,
you can run your own survey", and then you say I'm inaccurate without
stating why I'm inaccurate, while soliciting donations in the previous
e-mail. Do you see the major issue here? Flies, honey, vinegar.

I don't know how your project works, but if you're not doing the proper
work beforehand I don't know how it can work. Ask anybody who's run any
successful project. Heck, even the leaders of failed projects can tell
you. They probably have more information.
First, you define your goals. Next you gather and prepare your
resources. You do a test run, maybe more than one and hope you have
enough time. You have people with specific knowledge critique and make
adjustments. Finally you run the project, and afterwards you analyze
and make improvements for the next time. Those principles apply
whether you're running a for-profit project or a non-profit.
And that would be the bare-bones work if I were running a local
project. You're going global, which involves understanding cultural
differences as well. That is not the type of thing I would do with an
ad hoc team in which nobody has any experience in what I was doing in
the first place.
Like I said, define the questions, gather the mailing list. And if you
don't have access to anybody with experience in statistics, don't launch
until you do. A badly done survey is worse than none at all.