You’ve mentioned somewhere here that trust could expire automatically after a while. Is something like that implemented already? What do you think about trust expiring after a period of inactivity? When a user has been inactive for more than 30 or 60 days…
In our group we have quite a lot of users who really shouldn’t have editing permissions, but they do because they signed up before the feature was implemented. That’s one thing. But even for future users who acquire trust and editing permissions, it seems reasonable that they’d lose editing permissions if they are not active at all for a longer period of time.
No, right now trust stays forever, and once a user has editing permissions, they are permanent.
It sounds reasonable - after this time, users are unlikely to be familiar with day-to-day operations and there’s no good reason why they should edit the group.
That sounds interesting. Are they not suited because they don’t need the permissions, or because they might change something by accident? Have people abused their editing permissions already?
We talked about these topics with Foodsharing Warszawa too and the concept of social control seems to be quite powerful - if a respected person (e.g. somebody speaking “for the group”) tells people they are not allowed to do something, they usually don’t do it.
Of course, there’s always a chance that somebody acts against it and for this reason I want to implement something that groups can remove users. I think this is especially important because members who rightfully have some power (because they manage a store or take care of group administration) have more ways to abuse the power. Hence a need for checks & balances, and user removal.
Both because they don’t need them and because they might change something by accident. But no, so far we haven’t had any problem with abuse. I guess the only very small annoying incident I’ve observed was an inexperienced user, who had never done a pickup, accepting the application of a new user who completely ignored one of the important questions asked in the application, about reading and accepting our guidelines. Btw, is it the case that only trusted users can accept or decline an application?
This feature has not been needed so far in our group but will certainly be appreciated if/when the time comes.
Yes, that’s the case, accepting an applicant needs editing permissions.
I thought about introducing a new role for handling applications, because usually every group has its own procedure for accepting applicants and it needs some knowledge, like you describe. But the priority of adding this role is quite low compared to other tasks.
It is the same in our group. We haven’t had any abuse as far as I know either, but I guess that’s only because most people really don’t know that they even CAN change anything. Most people don’t go beyond the first page and maybe the feedback option. I think everybody just assumes that I am the admin of the group.
Most people don’t even get what the trust carrots are about, or even see that they are there.
I like that idea as well, as we have quite a few inactive people (almost half of our members). This will hopefully change now, as we also have an application feature and will soon probably have more stores. Otherwise it doesn’t really matter so much, because our inactive members probably wouldn’t just come and change stuff anyway.
Would be cool to add something that makes sure people who deserve trust carrots also get them.
If we required trust to keep editing permissions, then some people might get stripped of their rights even if they need them. We could implement a warning beforehand though…
It might be even worse if trust expires or if trust requirements change dynamically (e.g. needed trust is relative to group size and the group grows)
I think problems with people bringing chaos into group(-settings) can be resolved when we released the user removal (“conflict resolution”) feature - expect another post today or tomorrow!
In my opinion too many. It’s currently super easy to get editing rights. The current settings give editing rights to a user who is trusted by three other users. Do you think that’s restrictive enough? In small groups of 20 people, maybe it is. But in bigger groups this is hardly any limitation. In our group in Warsaw we have about 80 users. A lot of them joined because they were recommended by other members, so they already have someone who will trust them. It’s then only a matter of finding two more users who will trust them to get editing rights. As a result, users who have just joined the group may be given editing rights and can also take part in conflict resolutions about users they may not even know.
My proposal is to restrict editing rights more:
The number of people who need to trust a user before he/she gets editing rights should be higher. 5 is a minimum in my opinion, and this number should be agreed on by each group separately. This limitation ensures new users who get editing rights are verified by a bigger number of other users.
To get editing rights, a user should be trusted by at least X (e.g. 1) user with editing rights. This limitation ensures new users are not just members of small cliques who trust each other but are also trusted by someone who is already trusted.
New users shouldn’t get editing rights immediately after getting enough trust. There should be a waiting period, e.g. at least a month after signing up, which has to pass before a new user can get editing rights. This limitation ensures new users will have enough time to get to know the group and become more able to make reliable decisions.
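To make the three proposed rules concrete, here is a minimal sketch of how they could combine into one eligibility check. The field names, data shape, and thresholds are purely illustrative assumptions, not Karrot’s actual data model or implementation:

```python
from datetime import datetime, timedelta

# Illustrative thresholds only; each group would configure its own.
MIN_TRUSTS = 5                       # rule 1: raised minimum number of trusts
MIN_TRUSTS_FROM_EDITORS = 1          # rule 2: at least X trusting users are editors
WAITING_PERIOD = timedelta(days=30)  # rule 3: minimum time since signup

def may_get_editing_rights(user, now):
    """user is a dict with 'joined_at' (datetime) and 'trusted_by'
    (list of {'is_editor': bool}); True only if all three rules pass."""
    trusts = user["trusted_by"]
    if len(trusts) < MIN_TRUSTS:
        return False  # rule 1 failed: not enough trusts overall
    if sum(t["is_editor"] for t in trusts) < MIN_TRUSTS_FROM_EDITORS:
        return False  # rule 2 failed: no trust from an existing editor
    return now - user["joined_at"] >= WAITING_PERIOD  # rule 3
```

Under this sketch, a brand-new user with 5 trusts would still be held back by the waiting period, and a clique of non-editors trusting each other would be blocked by rule 2.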
Thanks for your thoughts on this, @mzpawlowski!
Did you have bad experiences with users abusing their editing rights or do you only assume that it might happen? I’d like to point out that we never intended the editing rights to turn into admin roles which only a few people have. The danger a group faces when needing approval of someone who already has editing rights is that elites form and the once open group actually closes down. I think it would be a shame if that happened.
I do agree that the number of trusts needed could adapt more to the size of the group. I’m not completely sure how it works right now, but I think it already adapts a little. Can you maybe shed light on this @tiltec? (I might write a manual page for this feature as well, if I get all the info… )
When it comes to the waiting time, I also think that this could make sense, but am unsure how long it should be. Are there more opinions on this?
No bad experience so far with editing rights being abused. I’ve just been pointing out the possibilities to consider.
As much as I understand your idea of a group where everyone is equal and there are no admin roles or so-called ‘elites’, I don’t think this is very safe. And there are probably not many examples of systems which work this way. When a group gets bigger and bigger, the small probability that a user has bad intentions can eventually materialize. With editing rights and bad intentions we could run into serious problems.
Imagine the following situation:
Michal, who has editing rights in Karrot, tends to abuse some rules, and someone has finally decided to open a conflict resolution case against him. Voting takes 7 days, during which Michal signs up to Karrot using 3 new e-mail addresses and accepts his own applications from his old account, which has editing rights. In total Michal has 4 accounts now. Each of them can be trusted by the 3 other accounts, all of which belong to him. As a result, all 4 accounts will get editing rights. Even if everyone in the voting decides to get rid of Michal (his first account), he will still have 3 more accounts with editing rights. New cases can be opened against them, but as long as Michal is not restricted by anything or anyone, he can continue this procedure forever.
This may sound unrealistic but it’s theoretically possible. I believe we have been able to avoid any situations like this in Warsaw because of two reasons:
We have a recruitment process during which we can evaluate a candidate before he/she joins the group on Karrot.
Not many Karrot users are aware of the rights they have.
But there is no guarantee it won’t happen. That’s why I think we need more user levels and / or user roles. For example, if there was a special user role who accepts new applications, the situation with Michal couldn’t happen (if Michal didn’t have rights to accept new applications).
I’m aware it might be against your idea behind Karrot development but I just want to highlight that it also poses serious risks.
I am totally aware of the risk you pointed out.
To me the more important point is the following: if somebody really wants to fuck with the group, they can do so anyway. These are problems that need to be solved by humans, not by software. Humans can do so much better, because they can evaluate case by case and don’t need one system that applies to every single case and does all of them justice. You seem to know that as well; that’s why you have a recruitment process outside of Karrot - and that is exactly what we hope all groups do.
When I read your example I think the thing to change would rather be that only one approval is needed for an application to be accepted. That’s another thing that was just a simple solution for the first iteration of a complex feature. In the future it could be that there’s a team of trusted and interested users who take care of applications and then only those can accept or decline, or that we have a similar case of adapting numbers like in the trust feature - although that would need some thinking, because negative and positive voices would need to be balanced out against each other somehow…
Anyway, there are many things we can change, but I seriously doubt that our approach of “Don’t let potential malicious use cases guide your design decisions!” will be the first one to go…
I agree 100%. We solve our problems outside of Karrot. But so far we haven’t had a real possibility to remove someone from the software, although we once needed to remove someone from the community. This person still has a Karrot account and could come back and mess things up in the system if he knew it was possible. I’m trying to avoid a situation where someone will want to misuse his/her powers in Karrot. That’s why I’m raising all these theoretical cases.
Don’t be offended, but for me the above sentences contradict each other. A team of trusted users who take care of applications is no different from an ‘elite’ who approves editing rights. Or do I not fully get your point here? Anyway, this is exactly how it works in our community in Warsaw. We have a group of a few people who take care of the application and recruitment process, and only this group, not the whole community, accepts or declines newcomers. This happens in real life, because in the software everyone with editing rights can accept applications. My point is that there are always groups within communities who have special rights, but those rights also come with greater responsibility. Such groups are not ‘elites’ though, if anyone can join them.
No you’re right, I didn’t explore the thought well enough to make sense. In this scenario the team of application managers (or whatever we wanna call them) would need to be elected by the group using the exact voting mechanism we introduced for the conflict resolution. Like @tiltec already outlined above, the voting could be used for many cases in which legitimization by the group becomes necessary. It’s not fully thought through yet, but the general idea is to combine the best of both worlds: The clear distribution of tasks that leads to users not being overwhelmed by possible responsibilities (and which groups normally have already in place anyways) and the dynamically adaptive and permeable structure of an open group that leads to everybody having the opportunity to get further involved if they show commitment and interest in a special aspect.
This is another of the core ideas we try to follow with Karrot: Not to push a certain artificial structure onto groups, but to represent the structure the groups already use in the physical world and to match its real rights and responsibilities also in the digital sphere. But that’s quite hard to do as it requires a lot of dynamic adaptability from the software, as well as constant legitimization of the group and at the same time we also don’t want to annoy the users with too many questions and options… So as you can see it’s a hard thing to balance, but we’ll continue to do our best!
Well, that should be changed by now, so I hope the immediate danger can be averted…
Both seem reasonable to me. The first seems easy to change, but I would additionally change the logic for how the trust threshold “grows” with group size. Otherwise smaller groups have too high requirements.
The second part could be a group setting, although I’m a bit hesitant about adding more settings, as there’s a complexity explosion with every customization option. And currently the code doesn’t really deal with changing thresholds, so it might lead to unexpected behavior.
That seems reasonable to me too, although it would need some exception for groups who are just getting started. Otherwise only the group founder would have editing permissions for the first weeks. I would add this only if there’s a strong reason for it.
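To illustrate what a group-size-adaptive threshold might look like, here is a purely hypothetical sketch (the function name and scaling numbers are assumptions, not Karrot’s actual logic): very small groups require fewer trusts than they have members, so editing rights stay reachable, and the requirement grows slowly toward a cap as the group gets bigger.

```python
def required_trusts(group_size, base=3, cap=5):
    """Hypothetical adaptive trust threshold.

    Tiny groups need fewer trusts than members exist; larger groups
    grow slowly from the base toward a cap so the bar stays reachable."""
    if group_size <= base:
        return max(1, group_size - 1)     # tiny group: nearly unanimous
    return min(cap, base + group_size // 50)  # slow growth, capped
```

With these illustrative numbers, a 2-person group would need 1 trust, a 20-person group 3, and an 80-person group like Warsaw 4.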
I had a quick look at the statistics, in the last three months in Foodsharing Warszawa:
159 trusts were given
10 editing permissions were granted
85 active members, of which 18 are newcomers (22 %)
The other big groups on karrot.world are Foodsharing i Östersund with 91 and Solikyl with 72 active members. I noticed they have a lot more newcomers (63 % and 57 %). They also have less activity (pickups+feedback+messages), so users spend less time on Karrot. I think there might be a connection.
Another interesting statistic would be “time from joining the group until gaining editing permission”, to have some guidance when adding a time threshold.
I’d really like to run a graph analysis, to identify how connected users are via trust. This should show if there are many separate “bubbles” in a group.
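As a first pass at that graph analysis, one could treat each trust as an undirected edge and list the connected components of the resulting graph; each component is one “bubble”. A rough stdlib-only sketch, where the `(giver, receiver)` edge format is an assumption rather than Karrot’s actual schema:

```python
from collections import defaultdict

def trust_bubbles(trust_edges):
    """Return connected components ("bubbles") of the trust graph.

    trust_edges: iterable of (giver, receiver) pairs, treated as
    undirected for the purpose of finding separate clusters."""
    adj = defaultdict(set)
    for a, b in trust_edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()   # depth-first traversal
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components
```

A group with one big component is well connected; several components of similar size would confirm the “bubbles” suspicion.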
I have scanned this topic and it’s clear to me that currently there is no option to withdraw trust from a user. Is this correct? I ask because someone in the Warsaw group told me that I had removed a trust I gave her earlier. I can’t remember removing a trust, especially since I didn’t know it was possible at all, but it really seems that this person had a trust from me and later it wasn’t there. Supposedly, there have been a few other incidents like this. Could it have happened at some point that trusts got removed? Is it possible to get some data from the back-end to confirm or reject the hypothesis that trusts are being removed?
Indeed, I already received one report that trust didn’t “stick”. I investigated a bit, but couldn’t find the problem.
You might be onto something there. Can you send me more details via private message? User names, ids and time frames are most useful!