* Add a limit to how many posts can get fetched as a result of a single request
* Add tests
* Always pass `request_id` when processing `Announce` activities
---------
Co-authored-by: nametoolong <nametoolong@users.noreply.github.com>
* Fix home timeline containing posts from users who blocked me
* Add test
* Fix feed_manager's build_crutches
blocked_by did not include the status' owner
* Add test for statuses from accounts I blocked
* Fix typo
Found via `codespell -q 3 -S ./yarn.lock,./CHANGELOG.md,./AUTHORS.md,./config/locales,./app/javascript/mastodon/locales -L ba,followings,keypair,medias,pattens,pixelx,rememberable,ro,te`
- Improve tests
- Fix possible crash when application of a reblogged post isn't set
- Fix discrepancies around favourited and reblogged attributes
- Fix discrepancies around pinned attribute
- Fix polls not being hydrated
* Change public accounts pages to mount the web UI
* Fix handling of remote usernames in routes
- When logged in, serve web app
- When logged out, redirect to permalink
- Fix `app-body` class not being set sometimes due to name conflict
* Fix missing `multiColumn` prop
* Fix failing test
* Use `discoverable` attribute to control indexing directives
* Fix `<ColumnLoading />` not using `multiColumn`
* Add `noindex` to accounts in REST API
* Change noindex directive to not be rendered by default before a route is mounted
* Add loading indicator for detailed status in web UI
* Fix missing indicator appearing while account is loading in web UI
* Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService
ActivityPub::FetchRemoteAccountService is kept as a wrapper for when the actor is
specifically required to be an Account
* Refactor SignatureVerification to allow non-Account actors
* fixup! Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService
* Refactor ActivityPub::FetchRemoteKeyService to potentially return non-Account actors
* Refactor inbound ActivityPub payload processing to accept non-Account actors
* Refactor inbound ActivityPub processing to accept activities relayed through non-Account actors
* Refactor how Account key URIs are built
* Refactor Request and drop unused key_id_format parameter
* Rename ActivityPub::Dereferencer `signature_account` to `signature_actor`
* Add model for custom filter keywords
* Use CustomFilterKeyword internally
Does not change the API
* Fix /filters/edit and /filters/new
* Add migration tests
* Remove whole_word column from custom_filters (covered by custom_filter_keywords)
* Redesign /filters
Instead of a list, present a card that displays more information and handles
multiple keywords per filter.
* Redesign /filters/new and /filters/edit to add and remove keywords
This adds a new gem dependency: cocoon, as well as an npm dependency:
cocoon-js-vanilla. Those are used to easily populate and remove form fields
from the user interface when manipulating multiple keyword filters at once.
* Add /api/v2/filters to edit filter with multiple keywords
Entities:
- `Filter`: `id`, `title`, `filter_action` (either `hide` or `warn`), `context`,
  `keywords`
- `FilterKeyword`: `id`, `keyword`, `whole_word`
API endpoints (a usage sketch follows this list):
- `GET /api/v2/filters` to list filters (including keywords)
- `POST /api/v2/filters` to create a new filter
`keywords_attributes` can also be passed to create keywords in one request
- `GET /api/v2/filters/:id` to read a particular filter
- `PUT /api/v2/filters/:id` to update a particular filter
`keywords_attributes` can also be passed to edit, delete or add keywords in
one request
- `DELETE /api/v2/filters/:id` to delete a particular filter
- `GET /api/v2/filters/:id/keywords` to list keywords for a filter
- `POST /api/v2/filters/:filter_id/keywords` to add a new keyword to a
  filter
- `GET /api/v2/filter_keywords/:id` to read a particular keyword
- `PUT /api/v2/filter_keywords/:id` to edit a particular keyword
- `DELETE /api/v2/filter_keywords/:id` to delete a particular keyword
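As a hedged usage sketch (Python with `requests`; the instance URL, token and field values are placeholders, and the endpoints are the ones listed above):
~~~
import requests

BASE = "https://mastodon.example"            # placeholder instance
HEADERS = {"Authorization": "Bearer TOKEN"}  # placeholder OAuth token

# Create a filter together with its keywords in a single request
resp = requests.post(
    f"{BASE}/api/v2/filters",
    headers=HEADERS,
    json={
        "title": "Spoilers",
        "context": ["home", "notifications"],
        "filter_action": "warn",
        "keywords_attributes": [{"keyword": "spoiler", "whole_word": True}],
    },
)
filter_id = resp.json()["id"]

# List all filters, including their keywords
print(requests.get(f"{BASE}/api/v2/filters", headers=HEADERS).json())
~~~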
* Change from `irreversible` boolean to `action` enum
* Remove irrelevant `irreversible_must_be_within_context` check
* Fix /filters/new and /filters/edit with update for filter_action
* Fix Rubocop/Codeclimate complaining about task names
* Refactor FeedManager#phrase_filtered?
This moves regexp building and filter caching to the `CustomFilter` class.
This does not change the functional behavior yet, but this changes how the
cache is built, doing per-custom_filter regexps so that filters can be matched
independently, while still offering caching.
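For illustration only (Python, not the actual Ruby `CustomFilter` code), per-filter regexp caching could look roughly like this:
~~~
import re
from functools import lru_cache

@lru_cache(maxsize=None)
def compiled_filter(filter_id, phrase, whole_word):
    # One regexp per filter, cached independently so filters can be
    # matched (and invalidated) one by one.
    pattern = re.escape(phrase)
    if whole_word:
        pattern = r"\b" + pattern + r"\b"
    return re.compile(pattern, re.IGNORECASE)

def phrase_filtered(text, filters):
    # filters: iterable of (id, phrase, whole_word) tuples
    return any(compiled_filter(*f).search(text) for f in filters)
~~~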
* Perform server-side filtering and output result in REST API
* Fix numerous filters_changed events being sent when editing multiple keywords at once
* Add some tests
* Use the new API in the WebUI
- use client-side logic for filters we have fetched rules for.
This is so that filter changes can be retroactively applied without
reloading the UI.
- use server-side logic for filters we haven't fetched rules for yet
(e.g. network error, or initial timeline loading)
* Minor optimizations and refactoring
* Perform server-side filtering on the streaming server
* Change the wording of filter action labels
* Fix issues pointed out by linter
* Change design of “Show anyway” link in accordance with review comments
* Drop “irreversible” filtering behavior
* Move /api/v2/filter_keywords to /api/v1/filters/keywords
* Rename `filter_results` attribute to `filtered`
* Rename REST::LegacyFilterSerializer to REST::V1::FilterSerializer
* Fix systemChannelId value in streaming server
* Simplify code by removing client-side filtering code
The simplification comes at a cost, though: filters aren't retroactively
applied anymore.
* Change RSS feeds
- Use date and time for titles instead of ellipsized text
- Use full content in body, even when there is a content warning
- Use media extensions
* Change feed icons and add width and height attributes to custom emojis
* Fix custom emoji animate-on-hover breaking
* Fix tests
* Fix PeerTube videos appearing with an erroneous “Edited at” marker
PeerTube videos have an `updated` field equal to `published`.
When processing an incoming activity that has the same value for `updated` and
`published`, assume this doesn't represent an actual edit.
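A minimal sketch of that check (Python-style, with assumed field access on the incoming JSON):
~~~
def actually_edited(activity):
    # PeerTube sets `updated` equal to `published` on unedited videos,
    # so only treat the status as edited when the two values differ.
    updated = activity.get("updated")
    return updated is not None and updated != activity.get("published")
~~~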
* Please CodeClimate
* Remove obsolete RSS::Serializer test
Since #17828, RSS::Serializer no longer has specific code for deleted statuses,
but it is never called on deleted statuses anyway.
* Rename erroneously-named test files
* Fix failing test
* Fix test deprecation warnings
* Update CircleCI Ruby orb
1.4.0 has a bug where it does not match all the test files due to incorrect
globbing
* Fix structured data parsing from links choking on bad data
- Fix og:url meta tag being prioritized over canonical link tag
- Fix structured data parsing choking on commented-out CDATA declarations
- Fix HTML entities in title, description, provider_name, author_name
- Change structured data parsing to attempt every JSON-LD script tag
* Remove unnecessary slash escapes from CDATA regex pattern
* Add support for editing for published statuses
* Fix references to stripped-out code
* Various fixes and improvements
* Further fixes and improvements
* Fix updates being potentially sent to unauthorized recipients
* Various fixes and improvements
* Fix wrong words in test
* Fix notifying accounts that were tagged but were not in the audience
* Fix mistake
* Add tests
* Ensure deleted statuses are marked as such
* Save some Redis memory by not storing URIs in delete_upon_arrival values
* Avoid possible race condition when processing incoming Deletes
* Avoid potential duplicate Delete forwards
* Lower lock durations to reduce issues in case of hard crash of the Rails process
* Check for `lock.acquired?` and improve comment
* Refactor RedisLock usage in app/lib/activitypub
* Fix using incorrect or non-existent sender for relaying Deletes
* Prepare Mastodon for zeitwerk autoloader (Rails 6)
Add inflections and rename/move a few classes.
In particular, app/lib/exceptions.rb and app/lib/sanitize_config.rb
were manually loaded while still in autoload paths.
* Add inflection for Url → URL
* Update twitter-text from 1.14 to 3.1.0
* Disable emoji parsing
* Properly depend on twitter-text for url detection
* Fix some URLs being wrongly detected client-side
* Add test for server-side validation of non-autolinkable URLs
* Fix server-side status length counting
* Fix URI of repeat follow requests not being recorded
In case we receive a “repeat” or “duplicate” follow request, we automatically
fast-forward the accept with the latest received Activity `id`, but we don't
record it.
In general, a “repeat” or “duplicate” follow request may happen if for some
reason (e.g. inconsistent handling of Block or Undo Accept activities, an
instance being brought back up from the dead, etc.) the local instance thought
the remote actor was following them while the remote actor thought otherwise.
In those cases, the remote instance does not know about the older Follow
activity `id`, so keeping that record serves no purpose, but knowing the most
recent one is useful if the remote implementation at some point refers to it
by `id` without inlining it.
* Add tests
* Added .deepsource.toml
* Removed bad use of `alias`
* Fixed operand order in the binary expression
* Prefixed unused method arguments with an underscore
* Replaced the old OpenSSL algorithmic constants with the newer string initializers.
* Removed unnecessary UTF-8 encoding comment
There are edge cases where requests to certain hosts time out when
using the vanilla HTTP.rb gem, which the goldfinger gem uses. Now
that we no longer need to support OStatus servers, webfinger logic
is so simple that there is no point encapsulating it in a gem, so
we can just use our own Request class. With that, we benefit from
more robust timeout code and IPv4/IPv6 resolution.
Fix #14091
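For reference, the underlying WebFinger exchange is a single HTTP request, sketched here in Python with `requests` (the handle is a placeholder):
~~~
import requests

def webfinger(acct):  # e.g. "user@example.com"
    domain = acct.split("@")[-1]
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{acct}"},
        headers={"Accept": "application/jrd+json"},
        timeout=10,
    )
    resp.raise_for_status()
    # The JRD document lists links, including the ActivityPub actor URI
    return resp.json()
~~~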
* Check for and record reblog info atomically
Instead of using ZREVRANK to determine whether a reblog is a new reblog or not,
use ZADD's NX option to perform the check and the addition atomically.
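The idea, sketched with redis-py (key and member names are placeholders, not Mastodon's actual keys):
~~~
import redis

r = redis.Redis()

def record_reblog(tracking_key, original_status_id, reblog_id):
    # ZADD ... NX only adds the member if it is not already present and
    # returns how many members were actually added, so the "is this the
    # first reblog we have seen?" check and the insertion are atomic.
    added = r.zadd(tracking_key, {original_status_id: reblog_id}, nx=True)
    return added == 1  # True: first reblog, push it to the feed
~~~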
* Replace ZREVRANK call with ZSCORE key which is more efficient
* Make tests a bit stricter
* Fix off-by-one
* Add database support for list show-reply preferences
* Add backend support to read and update list-specific show_replies settings
* Add basic UI to set list replies setting
* Add specs for list replies policy
* Switch "cycling" reply policy link to a set of radio inputs
* Capitalize replies_policy strings
* Change radio button design to be consistent with that of the directory explorer
* Change content-type to be always computed from file data
Restore the previous behavior: detecting the content-type isn't very
expensive, and some instances may serve files as application/octet-stream
regardless of their true type, making fetching media from them fail even
though it used to work pre-3.2.0.
* Add test
* Fix not handling Undo on some activity types when they aren't inlined
When receiving an Undo for a non-inlined activity, try looking it up in the
database using the URI. The queries are ad-hoc because we don't have a global
index of object URIs, and not all activity types are stored in the database
with an index on their URI.
Announces are just statuses, and have an index on URIs, so this check can
be done efficiently.
Accepts cannot be handled at all because we don't record their URI at any
point.
Follows don't have an index on URI, but they have an index on the issuing
account, which should make such queries largely manageable.
Likes don't have an index on URI, they have an index on the issuing account,
but the number of favs per account may be very high, so I decided not to
handle that.
Blocks don't have an index on URI, but they have an index on the issuing
account, which should make such queries largely manageable.
In all cases, if an Undo could not be handled properly, we call `delete_later!`
because that does not require us to know more than the URI of the undone
property.
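A rough illustration of that dispatch (runnable Python pseudologic; the in-memory indexes are hypothetical stand-ins for the ad-hoc database queries):
~~~
# Hypothetical in-memory indexes standing in for the ad-hoc DB queries
STATUSES_BY_URI = {}       # Announces: statuses have an index on URI
FOLLOWS_BY_ACCOUNT = {}    # Follows: indexed by the issuing account
BLOCKS_BY_ACCOUNT = {}     # Blocks: indexed by the issuing account

def handle_undo(activity_type, object_uri, sender, delete_later):
    if activity_type == "Announce":
        status = STATUSES_BY_URI.get(object_uri)
        if status is not None:
            return ("remove_announce", status)
    elif activity_type == "Follow":
        follow = FOLLOWS_BY_ACCOUNT.get(sender, {}).get(object_uri)
        if follow is not None:
            return ("unfollow", follow)
    elif activity_type == "Block":
        block = BLOCKS_BY_ACCOUNT.get(sender, {}).get(object_uri)
        if block is not None:
            return ("unblock", block)
    # Likes (too many per account) and Accepts (URI never recorded) are not
    # looked up; anything unresolved falls back to delete_later!
    return delete_later(object_uri)
~~~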
* Add tests
* Make newer blocks overwrite older ones
Allows re-synchronizing block info by re-blocking and un-blocking again
when the original Undo Block has been lost.
* Improve RSS entries for statuses
- Render polls in both accounts and tags serializers
- Refactor RSS serializers
- Change title preview to include ellipsis when truncated
- Change title preview to show CW instead of toot text
- Add tests
* Remove title from OEmbed serialization
Twitter doesn't serialize title either, and this allows us to move the
title formatting code to the RSS serializers.
* Fix wrong grouping in Twitter valid_url regex
* Add support for xmpp URIs
Fixes #9776
The difficult part is autolinking, because Twitter-text's extractor does
some pretty ad-hoc stuff to find things that “look like” URLs, and XMPP
URIs do not really match the assumptions of that lib, so it doesn't sound
wise to try to shoehorn it into the existing regex.
This is why I used a specific regex (very close, although slightly more
permissive than the RFC), and a specific scan function (a simplified version
of the generalized one from Twitter).
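For illustration, a deliberately simplified (and more permissive than RFC 5122) scan might look like this in Python:
~~~
import re

# Rough approximation of an XMPP URI: xmpp:user@host[/resource][?query]
XMPP_URI = re.compile(r"xmpp:[^\s<>\"']+@[^\s<>\"']+", re.IGNORECASE)

def extract_xmpp_uris(text):
    # A dedicated scan keeps XMPP handling out of the twitter-text URL regex
    return [match.group(0) for match in XMPP_URI.finditer(text)]

print(extract_xmpp_uris("reach me at xmpp:alice@example.org?message"))
~~~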
* Remove leading “xmpp:” from auto-linked text
* Revert "Fix ignoring whole status because of one invalid hashtag (#11621)"
This reverts commit dff46b260b.
* Fix statuses being rejected because of invalid hashtag names
* Add spec for invalid hashtag names in statuses
* Add test for featured tags controller
* Change silenced accounts to require approval on follow
* Also require approval for follows by people explicitly muted by target accounts
* Do not auto-accept silenced or muted accounts when switching from locked to unlocked
* Add `follow_requests_count` to verify_credentials
* Show “Follow requests” menu item if needed even if account is locked
* Add tests
* Correctly reflect that follow requests weren't auto-accepted when local account is silenced
* Accept follow requests from user-muted accounts to avoid leaking mutes
* Play animated custom emoji on hover in status
* Play animated custom emoji on hover in display names
* Play animated custom emoji on hover in bios/bio fields
* Add support for animation on hover on public pages emojis too
* Fix tests
* Code style cleanup
* Add a spam check
* Use Nilsimsa to generate locality-sensitive hashes and compare using Levenshtein distance
* Add more tests
* Add exemption when the message is a reply to something that mentions the sender
* Use Nilsimsa Compare Value instead of Levenshtein distance
* Use MD5 for messages shorter than 10 characters
* Add a message to the automated report, do not add non-public statuses to the
automated report, add a trust level to accounts, and make unsilencing
raise the trust level to prevent repeated spam checks on that account
* Expire spam check data after 3 months
* Add support for local statuses, reduce expiration to 1 week, always create a report
* Add content warnings to the spam check and exempt empty statuses
* Change Nilsimsa threshold to 95 and make sure removed statuses are removed from the spam check
* Add all matched statuses into automatic report
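A loose sketch of the comparison logic described above (Python; the `nilsimsa` package and its `compare_digests` helper are assumed to behave like the PyPI library, and the thresholds mirror the commits above):
~~~
import hashlib
# Assumption: a nilsimsa package exposing Nilsimsa(data).hexdigest() and
# compare_digests(a, b) returning a similarity score up to 128.
from nilsimsa import Nilsimsa, compare_digests

NILSIMSA_COMPARE_THRESHOLD = 95  # above this, two texts count as a match
NILSIMSA_MIN_SIZE = 10           # shorter texts fall back to exact MD5

def spam_digest(text):
    if len(text) < NILSIMSA_MIN_SIZE:
        return "md5:" + hashlib.md5(text.encode()).hexdigest()
    return "nilsimsa:" + Nilsimsa(text.encode()).hexdigest()

def matching(digest_a, digest_b):
    if digest_a.startswith("md5:") or digest_b.startswith("md5:"):
        return digest_a == digest_b
    score = compare_digests(digest_a[len("nilsimsa:"):],
                            digest_b[len("nilsimsa:"):])
    return score >= NILSIMSA_COMPARE_THRESHOLD
~~~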
* Remove Salmon and PubSubHubbub endpoints
* Add error when trying to follow OStatus accounts
* Fix new accounts not being created in ResolveAccountService
* Add request pool to improve delivery performance
Fix #7909
* Ensure connection is closed when exception interrupts execution
* Remove Timeout#timeout from socket connection
* Fix infinite retry loop on HTTP::ConnectionError
* Close sockets on failure, reduce idle time to 90 seconds
* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server
* Use a shared pool size, 512 by default, to stay below open file limit
* Add some tests
* Add more tests
* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds
* Use a shared pool that returns preferred connection but re-purposes other ones when needed
* Fix wrong connection being returned on subsequent calls within the same thread
* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
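Conceptually (illustrative Python only, not the actual Ruby connection_pool code), a per-site pool with idle reaping might be structured like this:
~~~
import time
from collections import defaultdict, deque
from threading import Lock

MAX_IDLE_TIME = 30  # seconds, matching the value mentioned above

class RequestPool:
    def __init__(self):
        self._idle = defaultdict(deque)  # site -> deque of (conn, idle_since)
        self._lock = Lock()

    def checkout(self, site, open_connection):
        with self._lock:
            if self._idle[site]:
                conn, _since = self._idle[site].popleft()
                return conn               # reuse an idle connection
        return open_connection(site)      # otherwise open a fresh one

    def checkin(self, site, conn):
        with self._lock:
            self._idle[site].append((conn, time.monotonic()))

    def reap(self, close_connection):
        # Called periodically: close connections that idled for too long
        now = time.monotonic()
        with self._lock:
            for conns in self._idle.values():
                while conns and now - conns[0][1] > MAX_IDLE_TIME:
                    close_connection(conns.popleft()[0])
~~~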
* Record account suspend/silence time and keep track of domain blocks
* Also unblock users who were suspended/silenced before dates were recorded
* Add tests
* Keep track of suspending date for users suspended through the CLI
* Show accurate number of accounts that would be affected by unsuspending an instance
* Change migration to set silenced_at and suspended_at
* Revert "Also unblock users who were suspended/silenced before dates were recorded"
This reverts commit a015c65d2d1e28c7b7cfab8b3f8cd5fb48b8b71c.
* Switch from using suspended and silenced to suspended_at and silenced_at
* Add post-deployment migration script to remove `suspended` and `silenced` columns
* Use Account#silence! and Account#suspend! instead of updating the underlying property
* Add silenced_at and suspended_at migration to post-migration
* Change account fabricator to translate suspended and silenced attributes
* Minor fixes
* Make unblocking domains always retroactive
* Prevent silenced local users from notifying remote users not following them
This is an attempt to extend the local restrictions of silenced users to the
federation.
* Add tests
* Add tests for making sure private statuses don't get sent over OStatus
* create account_identity_proofs table
* add endpoint for keybase to check local proofs
* add async task to update validity and liveness of proofs from keybase
* first pass keybase proof CRUD
* second pass keybase proof creation
* clean up proof list and add badges
* add avatar url to keybase api
* Always highlight the “Identity Proofs” navigation item when interacting with proofs.
* Update translations.
* Add profile URL.
* Reorder proofs.
* Add proofs to bio.
* Update settings/identity_proofs front-end.
* Use `link_to`.
* Only encode query params if they exist.
URLs without params had a trailing `?`.
* Only show live proofs.
* change valid to active in proof list and update liveness before displaying
* minor fixes
* add keybase config at well-known path
* extremely naive feature flagging off the identity proof UI
* fixes for rubocop
* make identity proofs page resilient to potential keybase issues
* normalize i18n
* tweaks for brakeman
* remove two unused translations
* cleanup and add more localizations
* make keybase_contacts an admin setting
* fix ExternalProofService my_domain
* use Addressable::URI in identity proofs
* use active model serializer for keybase proof config
* more cleanup of keybase proof config
* rename proof is_valid and is_live to proof_valid and proof_live
* cleanup
* assorted tweaks for more robust communication with keybase
* Clean up
* Small fixes
* Display verified identity identically to verified links
* Clean up unused CSS
* Add caching for Keybase avatar URLs
* Remove keybase_contacts setting
* Filter incoming Announce activities by relation to local activity
Reject if announcer is not followed by local accounts, and is not
from an enabled relay, and the object is not a local status
Follow-up to #10005
* Fix tests
* When self-boosting, embed original toot into Announce serialization
* Process unknown self-boosts from Announce object if it is more than a URI
* Add some self-boost specs
* Only serialize private toots in self-Announces
* Ensure blocked user unfollows blocker if Block/Undo Block are processed out of order
* Add specs for Block causing unfollow and for out-of-order Block + Undo
* Fix connect timeout not being enforced
The loop was catching the timeout exception that should stop execution, so the next IP would no longer be within a timed block, which led to requests taking much longer than 10 seconds.
* Use timeout on each IP attempt, but limit to 2 attempts
* Fix code style issue
* Do not break Request#perform if no block given
* Update method stub in spec for Request
* Move timeout inside the begin/rescue block
* Use Resolv::DNS with timeout of 1 to get IP addresses
* Update Request spec to stub Resolv::DNS instead of Addrinfo
* Fix Resolv::DNS stubs in Request spec
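The per-address retry described above can be sketched in Python (simplified; the real Ruby code also rejects private addresses and resolves via Resolv::DNS):
~~~
import socket

CONNECT_TIMEOUT = 10  # seconds, applied to each attempt separately
MAX_ATTEMPTS = 2      # only try the first two resolved addresses

def open_connection(host, port):
    addresses = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    last_error = None
    for *_rest, sockaddr in addresses[:MAX_ATTEMPTS]:
        try:
            # One dead IP cannot consume the whole connection budget
            return socket.create_connection(sockaddr[:2],
                                            timeout=CONNECT_TIMEOUT)
        except OSError as error:
            last_error = error
    raise last_error or OSError(f"could not connect to {host}:{port}")
~~~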
* Add silent column to mentions
* Save silent mentions in ActivityPub Create handler and optimize it
Move networking calls out of the database transaction
* Add "limited" visibility level masked as "private" in the API
Unlike DMs, limited statuses are pushed into home feeds. The access
control rules between direct and limited statuses are almost the same,
except for counter and conversation logic
* Ensure silent column is non-null, add spec
* Ensure filters don't check silent mentions for blocks/mutes
As those are "this person is also allowed to see" rather than "this
person is involved", they do not warrant filtering
* Clean up code
* Use Status#active_mentions to limit returned mentions
* Fix code style issues
* Use Status#active_mentions in Notification
And remove stream_entry eager-loading from Notification
updates some "context" and "it" lines to have clearer explanations
updates "context" lines to properly describe function input, and "it" lines to describe results
If the input text is blank after preparation (only a mention, only a URL,
or empty as in a media post), then use nil as the language,
since it's OK to show the status to everyone.
Otherwise, always fall back to the server's default locale
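A hedged sketch of that fallback (Python; `detect` and the locale value are stand-ins, not the actual detector):
~~~
def status_language(prepared_text, detect, server_default_locale="en"):
    # Mention-only, URL-only or bare media posts: no language, so the
    # status is shown to everyone.
    if not prepared_text.strip():
        return None
    # Otherwise try detection and fall back to the server's default locale
    return detect(prepared_text) or server_default_locale
~~~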
* Add keyword filtering
GET|POST /api/v1/filters
GET|PUT|DELETE /api/v1/filters/:id
- Irreversible filters can drop toots from home or notifications
- Other filters can hide toots through the client app
- Filters specify a phrase, the contexts in which it applies, and an optional expiration
* Make sure expired filters don't get applied client-side
* Add missing API methods
* Remove "regex filter" from column settings
* Add tests
* Add test for FeedManager
* Add CustomFilter test
* Add UI for managing filters
* Add streaming API event to allow syncing filters
* Fix tests
* No need to re-require sidekiq plugins, they are required via Gemfile
* Add derailed_benchmarks tool, no need to require TTY gems in Gemfile
* Replace ruby-oembed with FetchOEmbedService
Reduce startup by 45382 allocated objects
* Remove preloaded JSON-LD in favour of caching HTTP responses
Reduce boot RAM by about 6 MiB
* Fix tests
* Fix test suite by stubbing out JSON-LD contexts
* Enable updating additional account information from user preferences via REST API
Resolves #6553
* Pacify rubocop
* Decoerce incoming settings in UserSettingsDecorator
* Create user preferences hash directly from incoming credentials instead of going through ActionController::Parameters
* Clean up user preferences update
* Use ActiveModel::Type::Boolean instead of manually checking stringified number equivalence
The to_s method of HTTP::Response keeps blocking while it receives the whole
content, no matter how big it is. This means it may waste time receiving
unacceptably large files. It may also consume memory and disk in the
process. This solves the inefficiency by checking the response length while
receiving.
HTTP connections must be explicitly closed in many cases, and letting the
perform method close connections makes its callers less redundant and
prevents them from forgetting to close connections.
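A rough Python equivalent of the size-checked read (using `requests` streaming; the limit is illustrative, not Mastodon's actual value):
~~~
import requests

MAX_BODY_SIZE = 1024 * 1024  # 1 MiB, illustrative limit

def fetch_limited(url):
    with requests.get(url, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        body = bytearray()
        # Check the length while receiving instead of after the fact, so an
        # unacceptably large response is abandoned early; the `with` block
        # makes sure the connection is always closed.
        for chunk in resp.iter_content(chunk_size=16 * 1024):
            body.extend(chunk)
            if len(body) > MAX_BODY_SIZE:
                raise ValueError(f"{url} exceeds {MAX_BODY_SIZE} bytes")
        return bytes(body)
~~~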
* request: in the event of failure, try other IPs (#6761)
In the case where a name has multiple A/AAAA records, we should
try subsequent records instead of immediately failing when we have a
failure on the first IP address.
This significantly improves delivery success when there are network
connectivity problems affecting only IPv4 or IPv6.
* fix method call style
* request_spec: adjust test case to use Addrinfo
* request: Request/open: move private addr check to within begin/rescue
* request_spec: add case to test failover, fix exception check
* Double Addrinfo.foreach so that it correctly yields instances
A complementary change for precompute_feed_service_spec.rb also fixes its
random failure, which is caused by the Snowflake randomization of the order
of an original status and its reblog.
* Fix actors accepting invalid URI schemes or different host between URI and URL
* Fix statuses accepting invalid URI scheme or different host to actor
* Adjust tests to new requirements
* Improve readability of mismatching_origin?/invalid_origin? methods
* Don't normalize URLs in toots
URL normalization is ill-defined and may cause certain links to break.
* Change specs since we are not normalizing user-provided URLs
* Sanitize classlist properly
* Actually properly sanitize every class after the first
* Improve Formatter spec to check for multiple classes and non-space whitespace
* Avoid sending explicit Undo->Announce when original deleted
* Do not forward a reply back to the server that sent it
* Deduplicate inboxes of rebloggers' followers for delete forwarding
* Adjust test
* Fix wrong class, bad SQL, wrong variable, outdated comment
* Allow hiding of reblogs from followed users
This adds a new entry to the account menu to allow users to hide
future reblogs from a user (and then if they've done that, to show
future reblogs instead).
This does not remove or add historical reblogs from/to the user's
timeline; it only affects new statuses.
The API for this operates by sending a "reblogs" key to the follow
endpoint. If this is sent when starting a new follow, it will be
respected from the beginning of the follow relationship (even if
the follow request must be approved by the followee). If this is
sent when a follow relationship already exists, it will simply
update the existing follow relationship. As with the notification
muting, this will now return an object ({reblogs: [true|false]}) or
false for each follow relationship when requesting relationship
information for an account. This should cause few issues due to an
object being truthy in many languages, but some modifications may
need to be made in pickier languages.
Database changes: adds a show_reblogs column (default true,
non-nullable) to the follows and follow_requests tables. Because
these are non-nullable, we use the existing MigrationHelpers to
perform this change without locking those tables, although the
tables are likely to be small anyway.
Tests included.
See also <https://github.com/glitch-soc/mastodon/pull/212>.
* Rubocop fixes
* Code review changes
* Test fixes
This patchset closes #648 and resolves #3271.
* Rubocop fix
* Revert reblogs defaulting in argument, fix tests
It turns out we needed this for the same reason we needed it in muting:
if nil gets passed in somehow (most usually by an API client not passing
any value), we need to detect and handle it.
We could specify a default in the parameter and then also catch nil, but
there's no great reason to duplicate the default value.
* Add structure for lists
* Add list timeline streaming API
* Add list APIs, bind list-account relation to follow relation
* Add API for adding/removing accounts from lists
* Add pagination to lists API
* Add pagination to list accounts API
* Adjust scopes for new APIs
- Creating and modifying lists merely requires "write" scope
- Fetching information about lists merely requires "read" scope
* Add test for wrong user context on list timeline
* Clean up tests
* Clean up reblog-tracking sets from FeedManager
Builds on #5419, with a few minor optimizations and cleanup of sets
after they are no longer needed.
* Update tests, fix multiply-reblogged case
Previously, we would have lost the fact that a given status was
reblogged if the displayed reblog of it was removed; now we don't.
Also added tests to make sure FeedManager#trim cleans up our reblog
tracking keys, fixed up FeedCleanupScheduler to use the right loop,
and fixed the test for it.
* Keep references to all reblogs of a status on home feed
When inserting a reblog: add it to the set of reblogs of this status on
the feed; if the original status was present in the feed, add it to
that set as well.
When removing a reblog: remove it from that set, then take a random
remaining item from the set. If one exists, re-insert it into the feed,
otherwise do not re-insert anything.
Fix #4210
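Sketched with redis-py (key layout and member values are illustrative, not the exact keys Mastodon uses):
~~~
import redis

r = redis.Redis()

def track_reblog(feed_key, original_id, reblog_id):
    reblog_set = f"{feed_key}:reblogs:{original_id}"
    r.sadd(reblog_set, reblog_id)
    if r.zscore(feed_key, original_id) is not None:
        # The original status was already in the feed; remember it too
        r.sadd(reblog_set, original_id)

def untrack_reblog(feed_key, original_id, reblog_id):
    reblog_set = f"{feed_key}:reblogs:{original_id}"
    r.srem(reblog_set, reblog_id)
    # Re-insert a random remaining member (reblog or original), if any
    return r.srandmember(reblog_set)
~~~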
* When original is removed, toss out reblog references
We changed un-reblogging behavior when we implemented Snowflake IDs, inserting the un-reblogged status at the position where the reblogging status existed.
However, our API expects the home timeline to be ordered by status IDs, and max_id/since_id filter by zset score. Due to this, the un-reblogged status appears as the last item of the result set, and timeline expansion may skip many statuses.
So this reverts that change: the un-reblogged status is inserted at the position corresponding to its ID.
* Use non-serial IDs
This change makes a number of nontrivial tweaks to the data model in
Mastodon:
* All IDs are now 8 byte integers (rather than mixed 4- and 8-byte)
* IDs are now assigned as:
* Top 6 bytes: millisecond-resolution time from epoch
* Bottom 2 bytes: serial (within the millisecond) sequence number
  * See /lib/tasks/db.rake's `define_timestamp_id` for details (a short
    sketch of the packing follows this commit message), but note that the
    purpose of these changes is to make it difficult to determine the
    number of objects in a table from the ID of any object.
* The Redis sorted set used for the feed will have values used to look
up toots, rather than scores. This is almost always the same as the
existing behavior, except in the case of boosted toots. This change
was made because Redis stores scores as double-precision floats,
which cannot store the new ID format exactly. Note that this doesn't
cause problems with sorting/pagination, because ZREVRANGEBYSCORE
sorts lexicographically when scores are tied. (This will still cause
sorting issues when the ID gains a new significant digit, but that's
extraordinarily uncommon.)
Note a couple of tradeoffs have been made in this commit:
* lib/tasks/db.rake is used to enforce many/most column constraints,
because this commit seems likely to take a while to bring upstream.
Enforcing a post-migrate hook is an easier way to maintain the code
in the interim.
* Boosted toots will appear in the timeline as many times as they have
been boosted. This is a tradeoff due to the way the feed is saved in
Redis at the moment, but will be handled by a future commit.
This would effectively close Mastodon's #1059, as it is a
snowflake-like system of generating IDs. However, given how involved
the changes were simply within Mastodon, it may have unexpected
interactions with some clients, if they store IDs as doubles
(or as 4-byte integers). This was a problem that Twitter ran into with
their "snowflake" transition, particularly in JavaScript clients that
treated IDs as JS integers, rather than strings. It therefore would be
useful to test these changes at least in the web interface and popular
clients before pushing them to all users.
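An illustrative packing of such an ID (Python; the real generation lives in a Postgres function, see `define_timestamp_id` mentioned above):
~~~
import time

def timestamp_id(sequence):
    # Top 6 bytes: milliseconds since the Unix epoch;
    # bottom 2 bytes: per-millisecond sequence number.
    millis = int(time.time() * 1000)
    return (millis << 16) | (sequence & 0xFFFF)

example = timestamp_id(42)
print(example, hex(example))  # comfortably fits in a signed 64-bit integer
~~~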
* Fix JavaScript interface with long IDs
Somewhat predictably, the JS interface handled IDs as numbers, which in
JS are IEEE double-precision floats. This loses some precision when
working with numbers as large as those generated by the new ID scheme,
so we instead handle them here as strings. This is relatively simple,
and doesn't appear to have caused any problems, but should definitely
be tested more thoroughly than the built-in tests. Several days of use
appear to support this working properly.
BREAKING CHANGE:
The major(!) change here is that IDs are now returned as strings by the
REST endpoints, rather than as integers. In practice, relatively few
changes were required to make the existing JS UI work with this change,
but it will likely hit API clients pretty hard: it's an entirely
different type to consume. (The one API client I tested, Tusky, handles
this with no problems, however.)
Twitter ran into this issue when introducing Snowflake IDs, and decided
to instead introduce an `id_str` field in JSON responses. I have opted
to *not* do that, and instead force all IDs to 64-bit integers
represented by strings in one go. (I believe Twitter exacerbated their
problem by rolling out the changes three times: once for statuses, once
for DMs, and once for user IDs, as well as by leaving an integer ID
value in JSON. As they said, "If you’re using the `id` field with JSON
in a Javascript-related language, there is a very high likelihood that
the integers will be silently munged by Javascript interpreters. In most
cases, this will result in behavior such as being unable to load or
delete a specific direct message, because the ID you're sending to the
API is different than the actual identifier associated with the
message." [1]) However, given that this is a significant change for API
users, alternatives or a transition time may be appropriate.
1: https://blog.twitter.com/developer/en_us/a/2011/direct-messages-going-snowflake-on-sep-30-2011.html
* Restructure feed pushes/unpushes
This was necessary because the previous behavior used Redis zset scores
to identify statuses, but those are IEEE double-precision floats, so we
can't actually use them to identify all 64-bit IDs. However, it leaves
the code in a much better state for refactoring reblog handling /
coalescing.
Feed-management code has been consolidated in FeedManager, including:
* BatchedRemoveStatusService no longer directly manipulates feed zsets
* RemoveStatusService no longer directly manipulates feed zsets
* PrecomputeFeedService has moved its logic to FeedManager#populate_feed
(PrecomputeFeedService largely made lots of calls to FeedManager, but
didn't follow the normal adding-to-feed process.)
This has the effect of unifying all of the feed push/unpush logic in
FeedManager, making it much more tractable to update it in the future.
Due to some additional checks that must be made during, for example,
batch status removals, some Redis pipelining has been removed. It does
not appear that this should cause significantly increased load, but if
necessary, some optimizations are possible in batch cases. These were
omitted in the pursuit of simplicity, but a batch_push and batch_unpush
would be possible in the future.
Tests were added to verify that pushes happen under expected conditions,
and to verify reblog behavior (both on pushing and unpushing). In the
case of unpushing, this includes testing behavior that currently leads
to confusion such as Mastodon's #2817, but this codifies that the
behavior is currently expected.
* Rubocop fixes
I could swear I made these changes already, but I must have lost them
somewhere along the line.
* Address review comments
This addresses the first two comments from review of this feature:
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139336735
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139336931
This adds an optional argument to FeedManager#key, the subtype of feed
key to generate. It also tests to ensure that FeedManager's settings are
such that reblogs won't be tracked forever.
* Hardcode IdToBigints migration columns
This addresses a comment during review:
https://github.com/tootsuite/mastodon/pull/4801#discussion_r139337452
This means we'll need to make sure that all _id columns going forward
are bigints, but that should happen automatically in most cases.
* Additional fixes for stringified IDs in JSON
These should be the last two. These were identified using eslint to try
to identify any plain casts to JavaScript numbers. (Some such casts are
legitimate, but these were not.)
Adding the following to .eslintrc.yml will identify casts to numbers:
~~~
no-restricted-syntax:
- warn
- selector: UnaryExpression[operator='+'] > :not(Literal)
message: Avoid the use of unary +
- selector: CallExpression[callee.name='Number']
message: Casting with Number() may coerce string IDs to numbers
~~~
The remaining three casts appear legitimate: two casts to array indices,
one in a server to turn an environment variable into a number.
* Only implement timestamp IDs for Status IDs
Per discussion in #4801, this is only being merged in for Status IDs at
this point. We do this in a migration, as there is no longer use for
a post-migration hook. We keep the initialization of the timestamp_id
function as a Rake task, as it is also needed after db:schema:load (as
db/schema.rb doesn't store Postgres functions).
* Change internal streaming payloads to stringified IDs as well
This is equivalent to 591a9af356faf2d5c7e66e3ec715502796c875cd from
#5019, with an extra change for the addition to FeedManager#unpush.
* Ensure we have a status_id_seq sequence
Apparently this is not a given when specifying a custom ID function,
so now we ensure it gets created. This uses the generic version of this
function to more easily support adding additional tables with timestamp
IDs in the future, although it would be possible to cut this down to a
less generic version if necessary. It is only run during db:schema:load
or the relevant migration, so the overhead is extraordinarily minimal.
* Transition reblogs to new Redis format
This provides a one-way migration to transition old Redis reblog entries
into the new format, with a separate tracking entry for reblogs.
It is not invertible because doing so could (if timestamp IDs are used)
require a database query for each status in each users' feed, which is
likely to be a significant toll on major instances.
* Address review comments from @akihikodaki
No functional changes.
* Additional review changes
* Heredoc cleanup
* Run db:schema:load hooks for test in development
This matches the behavior in Rails'
ActiveRecord::Tasks::DatabaseTasks.each_current_configuration, which
would otherwise break `rake db:setup` in development.
It also moves some functionality out to a library, which will be a good
place to put additional related functionality in the near future.
* Add emoji autosuggest
Some credit goes to glitch-soc/mastodon#149
* Remove server-side shortcode->unicode conversion
* Insert shortcode when suggestion is custom emoji
* Remove remnant of server-side emojis
* Update style of autosuggestions
* Fix wrong emoji filenames generated in autosuggest item
* Do not lazy load emoji picker, as that no longer works
* Fix custom emoji autosuggest
* Fix multiple "Custom" categories getting added to emoji index, only add once
* Custom emoji
- In OStatus: `<link rel="emoji" name="coolcat" href="http://..." />`
- In ActivityPub: `{ type: "Emoji", name: ":coolcat:", href: "http://..." }`
- In REST API: Status object includes `emojis` array (`shortcode`, `url`)
- Domain blocks with reject media stop emojis
- Emoji file up to 50KB
- Web UI handles custom emojis
- Static pages render custom emojis as `<img />` tags
Side effects:
- Undo #4500 optimization, as I needed to modify it to restore
shortcode handling in emojify()
- Formatter#plaintext should now make sure stripped out line-breaks
and paragraphs are replaced with newlines
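For illustration only (Python, not the actual Ruby/JS code), extracting custom emoji from an ActivityPub payload shaped as described above:
~~~
def custom_emojis(activity):
    # Collect tags of the form { "type": "Emoji", "name": ":coolcat:", ... };
    # the commit message abbreviates the image as `href`, while some
    # implementations nest it under `icon.url`, so accept either.
    emojis = {}
    for tag in activity.get("tag", []):
        if tag.get("type") != "Emoji":
            continue
        shortcode = tag.get("name", "").strip(":")
        url = tag.get("href") or tag.get("icon", {}).get("url")
        if shortcode and url:
            emojis[shortcode] = url
    return emojis
~~~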
* Fix emoji at the start not being converted
* Fix ActivityPub handling of replies when LOCAL_DOMAIN ≠ WEB_DOMAIN (#4895)
For all intents and purposes, `local_url?` is used to check whether a URL refers
to the Web UI or the various API endpoints of the local instances. Those things
reside on `WEB_DOMAIN` and not `LOCAL_DOMAIN`.
* Change local_url? spec, as all URLs handled by Mastodon are based on WEB_DOMAIN
Before, the method used the stream_entry id as the status id, so the replied-to status was wrongly selected.
This PR uses StatusFinder which was introduced with `Api::Web::EmbedsController`.
* Decouple Status#local? from uri being nil
* Replace on-the-fly URI generation with stored URIs
- Generate URI in after_save hook for local statuses
- Use static value in TagManager when available, fallback to tag format
- Make TagManager use ActivityPub::TagManager to understand new format
- Adjust tests
* Use other heuristic for locality of old statuses, do not perform long query
* Exclude tombstone stream entries from Atom feed
* Prevent nil statuses from landing in Pubsubhubbub::DistributionWorker
* Fix URI not being saved (#4818)
* Add more specs for Status
* Save generated uri immediately
and also fix method order to minimize diff.
* Fix alternate HTML URL in Atom
* Fix tests
* Remove not-null constraint from statuses migration to speed it up