<?xml version="1.0" encoding="utf-8" standalone="yes"?>
  <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      
      <title>Tor Project status</title>
      <link>https://status.torproject.org/</link>
      <description>Incident history</description>
      <generator>github.com/cstate</generator>
      <language>en</language>
      
      <lastBuildDate>Sat, 20 Sep 2025 17:40:10 +0000</lastBuildDate>
      
      
      
        <atom:link href="https://status.torproject.org/index.xml" rel="self" type="application/rss+xml" />
      
      
      
        <item>
          <title>[Resolved] DNSSEC outage</title>
          <link>https://status.torproject.org/issues/2025-09-20-dnssec-outage/</link>
          <pubDate>Sat, 20 Sep 2025 17:40:10 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2025-09-20-dnssec-outage/</guid>
          <category>2025-09-20T20:32:00-00:00</category>
          <description>&lt;p&gt;All services are affected by a major DNS outage.&lt;/p&gt;
&lt;p&gt;It seems a DNSSEC key rotation happened sooner than expected: the
rotation was planned for November 2025, but it happened today,
breaking DNS resolution for all &lt;code&gt;torproject.org&lt;/code&gt; domains around
2025-09-20 17:40:10 UTC.&lt;/p&gt;
&lt;p&gt;Keys were updated in the &lt;code&gt;.org&lt;/code&gt; registry at around 19:32 UTC. With the
one-hour time-to-live on the &lt;code&gt;.org&lt;/code&gt; DS records, we expect this
outage to be fully resolved by 20:32 UTC.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;.net&lt;/code&gt; domains have a much longer time-to-live (a full day),
which we believe lessened the impact of the outage there. Their DS
keys were updated at 20:32 UTC, about three hours after the outage
began, but because of the one-day time-to-live some resolvers may keep
serving stale records for up to a full day, with service fully
returning by 2025-09-21 20:32 UTC.&lt;/p&gt;
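The TTL arithmetic above (a worst-case estimate, assuming resolvers hold the stale DS record for one full TTL after the registry publishes the new keys; the helper name below is ours, not part of any tooling) can be sketched as:

```python
from datetime import datetime, timedelta, timezone

def expected_recovery(ds_updated: datetime, ttl: timedelta) -> datetime:
    # Worst case: a resolver fetched the old DS record just before the
    # registry update, so it keeps the stale record for one full TTL.
    return ds_updated + ttl

# .org DS records: updated 19:32 UTC, one-hour TTL
org_fixed = expected_recovery(datetime(2025, 9, 20, 19, 32, tzinfo=timezone.utc),
                              timedelta(hours=1))
# .net DS records: updated 20:32 UTC, one-day TTL
net_fixed = expected_recovery(datetime(2025, 9, 20, 20, 32, tzinfo=timezone.utc),
                              timedelta(days=1))
print(org_fixed.isoformat())  # 2025-09-20T20:32:00+00:00
print(net_fixed.isoformat())  # 2025-09-21T20:32:00+00:00
```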
&lt;p&gt;Details in the GitLab incident &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/42297&#34;&gt;tpo/tpa/team#42297&lt;/a&gt;.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] GitLab maintenance</title>
          <link>https://status.torproject.org/issues/2025-06-25-gitlab-downtime/</link>
          <pubDate>Wed, 25 Jun 2025 10:52:41 -0400</pubDate>
          <guid>https://status.torproject.org/issues/2025-06-25-gitlab-downtime/</guid>
          <category>2025-06-25 15:55:00 &#43;0000</category>
          <description>&lt;p&gt;GitLab is having various issues: mail not going out, merge requests
failing. On top of that, it&amp;rsquo;s running out of disk space.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;re doing emergency maintenance on the server; we hope to have
this resolved within one or two hours.&lt;/p&gt;
&lt;p&gt;Details are available in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/42218&#34;&gt;tpo/tpa/team#42218&lt;/a&gt; which, of
course, might itself be unavailable right now (&lt;a href=&#34;http://web.archive.org/web/20250625145534/https://gitlab.torproject.org/tpo/tpa/team/-/issues/42218&#34;&gt;archive.org copy&lt;/a&gt;).&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Graphs on Metrics website are not updated beyond 2025-05-31</title>
          <link>https://status.torproject.org/issues/2025-06-10-graphs-on-metrics-website/</link>
          <pubDate>Tue, 10 Jun 2025 11:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2025-06-10-graphs-on-metrics-website/</guid>
          <category>2025-06-19 08:25:00</category>
          <description>&lt;p&gt;We&amp;rsquo;re aware of a problem that prevents the graphs on metrics.torproject.org from updating beyond 2025-05-31. Requesting data and graphs for timeframes before that is working fine, though. We are actively &lt;a href=&#34;https://gitlab.torproject.org/tpo/network-health/metrics/website/-/issues/40128&#34;&gt;working on resolving this issue&lt;/a&gt; as soon as possible.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] BridgeDB graphs on Metrics website are not shown</title>
          <link>https://status.torproject.org/issues/2025-04-10-bridgedb-graphs-on-metrics-website/</link>
          <pubDate>Thu, 10 Apr 2025 13:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2025-04-10-bridgedb-graphs-on-metrics-website/</guid>
          <category>2025-06-25 16:00:00</category>
          <description>&lt;p&gt;We&amp;rsquo;re aware of a problem that prevents the BridgeDB related information on metrics.torproject.org from being rendered. This affects the BridgeDB requests by transport and distributor graphs in particular. The underlying data is available in the respective .csv files, though. We are actively &lt;a href=&#34;https://gitlab.torproject.org/tpo/network-health/metrics/website/-/issues/40123&#34;&gt;working on resolving this issue&lt;/a&gt; as soon as possible.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Issues with Ubuntu packages on deb.torproject.org and its Onion Service</title>
          <link>https://status.torproject.org/issues/2025-02-25-ubuntu-packages/</link>
          <pubDate>Tue, 25 Feb 2025 10:02:59 -0500</pubDate>
          <guid>https://status.torproject.org/issues/2025-02-25-ubuntu-packages/</guid>
          <category>2025-02-28T05:23:07&#43;0000</category>
          <description>&lt;p&gt;We are aware of two issues related to deb.torproject.org that we are
working on fixing:&lt;/p&gt;
&lt;p&gt;The first issue is that Ubuntu packages are missing when using apt. Tor
has, for a while, had flaky CI across our entire infrastructure due to
Docker upstream adding rate-limiting for their images, and our
different teams have been working on moving to our own Docker images
with our own container registry.&lt;/p&gt;
&lt;p&gt;Things went a bit too fast here, but we expect to have x86-64 and
aarch64 packages for the Ubuntu releases out again very soon. This
should NOT negatively impact your current installs of Tor, but apt
will be unhappy when you do an &lt;code&gt;apt update&lt;/code&gt; until this is resolved
(expected ETA: one day).&lt;/p&gt;
&lt;p&gt;Followup on this issue in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/42052&#34;&gt;tpo/tpa/team#42052&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In addition to the situation with the Ubuntu packages, we are also
aware of an issue with the Onion Service provided for
deb.torproject.org being unreachable/unstable, see
&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/42054&#34;&gt;tpo/tpa/team#42054&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Update: The Ubuntu builds for &lt;code&gt;focal&lt;/code&gt;, &lt;code&gt;jammy&lt;/code&gt;, &lt;code&gt;noble&lt;/code&gt; and &lt;code&gt;oracular&lt;/code&gt;
are back, as are the &lt;code&gt;arm64&lt;/code&gt; builds for all suites on both Debian and
Ubuntu.&lt;/p&gt;
&lt;p&gt;We dropped the &lt;code&gt;lunar&lt;/code&gt; and &lt;code&gt;mantic&lt;/code&gt; Ubuntu releases after we
experienced some issues building these packages. Since both releases
have been out of support for several months at this point, hopefully
that won&amp;rsquo;t be too much of an issue.&lt;/p&gt;
&lt;p&gt;If you notice an unfamiliar package version suffix, &lt;code&gt;+tpo1&lt;/code&gt;, that&amp;rsquo;s
expected: those are packages specific to &lt;code&gt;deb.torproject.org&lt;/code&gt;.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Router maintenance at Hetzner</title>
          <link>https://status.torproject.org/issues/2024-12-03-hetzner-router-maintenance/</link>
          <pubDate>Tue, 03 Dec 2024 03:30:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-12-03-hetzner-router-maintenance/</guid>
          <category>2024-12-03 5:30:00 &#43;0000</category>
          <description>&lt;p&gt;Hetzner has planned an emergency maintenance window on all of their routers,
which will cause a network outage on all of our hosts in their datacenters. The
maintenance window is planned on December 3rd 2024 from 3:30 UTC to 5:30 UTC,
during which the network may experience spurious outages.&lt;/p&gt;
&lt;p&gt;Services should come back online automatically as soon as network connectivity
is restored by Hetzner.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] GitLab migration to another machine cluster</title>
          <link>https://status.torproject.org/issues/2024-11-28-gitlab-migration/</link>
          <pubDate>Thu, 28 Nov 2024 18:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-11-28-gitlab-migration/</guid>
          <category>2024-11-29 14:04:00 &#43;0000</category>
          <description>&lt;p&gt;Starting on November 28th at 18:00 UTC, gitlab.torproject.org will be taken
offline in order to migrate it to our other machine cluster.&lt;/p&gt;
&lt;p&gt;The transfer time is currently estimated at 18 hours, so GitLab should come back
online the next day, Friday the 29th. If we&amp;rsquo;re lucky the transfer might finish sooner.&lt;/p&gt;
&lt;p&gt;If you have any issues during that time, please reach out to us on IRC or via
email.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Nextcloud down</title>
          <link>https://status.torproject.org/issues/2024-11-25-nextcloud/</link>
          <pubDate>Mon, 25 Nov 2024 13:15:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-11-25-nextcloud/</guid>
          <category>2024-11-25 13:31:00 &#43;0000</category>
          <description>&lt;p&gt;Tor&amp;rsquo;s Nextcloud (&lt;a href=&#34;https://nc.torproject.net&#34;&gt;https://nc.torproject.net&lt;/a&gt;) is down since at least Nov 25th
13:01 UTC. Our service provider (Riseup) has been notified and we&amp;rsquo;ll update
this page as soon as we have more information.&lt;/p&gt;
&lt;p&gt;UPDATE: Service is back, we don&amp;rsquo;t yet have info about what happened.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Switching mail servers</title>
          <link>https://status.torproject.org/issues/2024-11-25-switching-mail-servers/</link>
          <pubDate>Mon, 25 Nov 2024 11:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-11-25-switching-mail-servers/</guid>
          <category>2024-11-25 15:50:00 &#43;0000</category>
          <description>&lt;p&gt;On November 25st 2024, starting at 11:00 UTC, we will be switching to new
mail servers for receiving and forwarding mail sent to torproject.org. If all
goes well, there will be no downtime and deliverability to external
providers like gmail will improve.&lt;/p&gt;
&lt;p&gt;If all does not go well, please reach out on IRC or file an issue on GitLab
rather than mailing us, since e-mail may not be reliable during the switch. For more details on
how to get in touch and how to report e-mail problems, see:
&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/wikis/support&#34;&gt;https://gitlab.torproject.org/tpo/tpa/team/-/wikis/support&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Update: all went well :)&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Power maintenance at Qupra</title>
          <link>https://status.torproject.org/issues/2024-11-21-qupra-power-maintenance/</link>
          <pubDate>Thu, 21 Nov 2024 00:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-11-21-qupra-power-maintenance/</guid>
          <category>2024-11-21 23:59:00 &#43;0000</category>
          <description>&lt;p&gt;Qupra had planned a power maintenance window for their entire datacenter, which
could have caused outage on two of the Tails servers that are hosted there. The
maintenance window was planned on November 21st 2024, but no exact time window
has been communicated.&lt;/p&gt;
&lt;p&gt;The maintenance did not result in any downtime.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Riseup network unreachable</title>
          <link>https://status.torproject.org/issues/2024-11-12-riseup-provider/</link>
          <pubDate>Tue, 12 Nov 2024 04:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-11-12-riseup-provider/</guid>
          <category>2024-11-12 20:20:00 &#43;0000</category>
          <description>&lt;p&gt;The network in which our Nextcloud instance and large sections of the
Tails infrastructure are hosted is currently unreachable.&lt;/p&gt;
&lt;p&gt;Riseup is working with their upstream providers to resolve the issue,
but there is currently no clear indication when it will be resolved. More
information can be found on Riseup&amp;rsquo;s own &lt;a href=&#34;https://riseupstatus.net/issues/2024-11-11-partial-network-outage/&#34;&gt;status page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Update, 2024-11-12, 13:10 UTC: we have moved the Nextcloud instance to a
new IP address and it should become available again once DNS caches have
expired (this may take up to three hours).&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Mailman 3 upgrade</title>
          <link>https://status.torproject.org/issues/2024-10-29-mailman3-upgrade/</link>
          <pubDate>Mon, 04 Nov 2024 14:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-10-29-mailman3-upgrade/</guid>
          <category>2024-11-04 21:00:00 &#43;0000</category>
          <description>&lt;p&gt;TL;DR: Mailman 3 upgrade happened on November 4th, starting at 14:00
UTC, and ending around 21:00 UTC. Report issues on GitLab.&lt;/p&gt;
&lt;p&gt;As mentioned in early October, we&amp;rsquo;re in the process of upgrading our
main mail server, which includes upgrading to the shiny new Mailman 3
platform. This will involve a short per-list outage as each list
migrates over to the new server. It&amp;rsquo;s unclear how long the maintenance
will last, but assume mailing lists will be disrupted all day, although
each migration should only take a couple of minutes.&lt;/p&gt;
&lt;p&gt;We have, right now, a prototype mailman 3 server available at:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://lists-01.torproject.org/&#34;&gt;https://lists-01.torproject.org/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The TPA mailing list has been successfully migrated already and next
week, I&amp;rsquo;ll start migrating the other mailing lists (including this
one!).&lt;/p&gt;
&lt;p&gt;Be warned that Mailman 3 is a significant upgrade from Mailman 2. There
are some great things (like unified authentication), and some less great
things (like a more complex design and a &amp;ldquo;shinier&amp;rdquo; web interface that
might not be to everyone&amp;rsquo;s taste).&lt;/p&gt;
&lt;p&gt;As a reminder, we&amp;rsquo;re doing this upgrade a little rushed because the main
mail server is now unsupported for security upgrades. See the details of
the proposal here:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-71-emergency-email-deployments-round-2&#34;&gt;https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-71-emergency-email-deployments-round-2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;hellip; with the upgrade work being tracked in this issue:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/40471&#34;&gt;https://gitlab.torproject.org/tpo/tpa/team/-/issues/40471&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] schleuder maintenance</title>
          <link>https://status.torproject.org/issues/2024-10-07-schleuder-migration/</link>
          <pubDate>Mon, 07 Oct 2024 12:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-10-07-schleuder-migration/</guid>
          <category>2024-10-07 12:02:00 &#43;0000</category>
          <description>&lt;p&gt;Schleuder lists administration will be down for maintenance on Monday
around 12:00 UTC, equivalent to 05:00 US/Pacific, 09:00 America/Sao_Paulo,
08:00 US/Eastern, 14:00 Europe/Amsterdam.&lt;/p&gt;
&lt;p&gt;Schleuder lists will be &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41796&#34;&gt;migrated to a new server&lt;/a&gt; with a more recent
schleuder version. Mail delivery should be unaffected, but changes in the
list configuration or keys made during the maintenance window may be lost.&lt;/p&gt;
&lt;p&gt;The migration is expected to take no more than one hour, but no less than 15
minutes.&lt;/p&gt;
&lt;p&gt;See the discussion issue for more information and feedback:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41796&#34;&gt;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41796&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] donate website maintenance</title>
          <link>https://status.torproject.org/issues/2024-09-26-donate-migration/</link>
          <pubDate>Wed, 02 Oct 2024 16:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-09-26-donate-migration/</guid>
          <category>2024-10-02 18:40:00 &#43;0000</category>
          <description>&lt;p&gt;Donation site will be down for maintenance on Wednesday around 14:00
UTC, equivalent to 07:00 US/Pacific, 11:00 America/Sao_Paulo, 10:00
US/Eastern, 16:00 Europe/Amsterdam.&lt;/p&gt;
&lt;p&gt;Update: maintenance was delayed by 2 hours, so this is now 09:00
US/Pacific, 13:00 America/Sao_Paulo, 12:00 US/Eastern, 16:00 UTC,
18:00 Europe/Amsterdam.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;re having &lt;a href=&#34;https://gitlab.torproject.org/tpo/web/donate-neo/-/issues/134&#34;&gt;latency issues&lt;/a&gt; with the main donate site. We hope
that migrating it from our data center in Germany to the one in Dallas
will help fix those issues as it will be physically closer to the rest
of the cluster.&lt;/p&gt;
&lt;p&gt;The outage is expected to take no more than two hours, but no less than 15
minutes.&lt;/p&gt;
&lt;p&gt;See the discussion issue for more information and feedback:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41775&#34;&gt;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41775&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Network Performance Issues</title>
          <link>https://status.torproject.org/issues/2024-05-14-network-performance-issues/</link>
          <pubDate>Tue, 14 May 2024 00:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-05-14-network-performance-issues/</guid>
          <category>2024-06-28 00:00:00 &#43;0000</category>
          <description>&lt;p&gt;We&amp;rsquo;ve been experiencing an unusally high load on the Tor network during the last
couple of weeks, which impacts the performance of onion services and non-onion
services traffic. We are currently investigating potential mitigations.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Tor Weather outage</title>
          <link>https://status.torproject.org/issues/2024-03-19-tor-weather-outage/</link>
          <pubDate>Tue, 19 Mar 2024 15:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-03-19-tor-weather-outage/</guid>
          <category>2024-03-25 08:58:00 &#43;0000</category>
          <description>&lt;p&gt;We are in the process of upgrading Tor Weather to version 2.0. For more information about the details and expected downtime, see the &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41556&#34;&gt;tracking ticket&lt;/a&gt;. We are sorry for the inconvenience.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Tor Check outage</title>
          <link>https://status.torproject.org/issues/2024-01-31-check-outage/</link>
          <pubDate>Wed, 31 Jan 2024 03:25:07 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-01-31-check-outage/</guid>
          <category>2024-01-31 15:50:42 &#43;0000</category>
          <description>&lt;p&gt;We are currently investigating a Tor Check outage that started after upgrading the underlying operating system to the latest Debian Stable release. Details and more context can be found in our &lt;a href=&#34;https://gitlab.torproject.org/tpo/network-health/metrics/tor-check/-/issues/40017&#34;&gt;Gitlab&lt;/a&gt; &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41252#note_2990660&#34;&gt;bug tracker&lt;/a&gt;. Sorry for the inconvenience.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] ExoneraTor outage</title>
          <link>https://status.torproject.org/issues/2024-01-02-exonerator-outage/</link>
          <pubDate>Tue, 02 Jan 2024 19:25:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2024-01-02-exonerator-outage/</guid>
          <category>2024-02-07 15:36:56 &#43;0000</category>
          <description>&lt;p&gt;We are currently investigating an unusually high and sustained load on the ExoneraTor service, causing most queries to time out, resulting in a 502 &amp;ldquo;Proxy Error&amp;rdquo; page. Details and more context can be found in our &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41507&#34;&gt;Gitlab bug tracker&lt;/a&gt; Sorry for the inconvenience.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Gitlab and CollecTor outage</title>
          <link>https://status.torproject.org/issues/2023-12-06-gitlab-collector-outage/</link>
          <pubDate>Wed, 06 Dec 2023 14:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2023-12-06-gitlab-collector-outage/</guid>
          <category>2023-12-06 15:29:09 &#43;0000</category>
          <description>&lt;p&gt;We&amp;rsquo;ve experienced heavy load and unresponsiveness on some of our
services (e.g. Gitlab and CollecTor) leading to outages and
disruptions.&lt;/p&gt;
&lt;p&gt;The issue seemed to have resolved itself; investigation suggested
this was a routing issue upstream.&lt;/p&gt;
&lt;p&gt;Update: the issue crept up again, and the root cause turned out to be
elevated temperatures on the hard drives of the affected
server. Upstream replaced fans in the server and the situation has
returned to normal.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41429&#34;&gt;issue tpo/tpa/team#41429&lt;/a&gt; for detailed analysis and updates.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Tor Weather data loss</title>
          <link>https://status.torproject.org/issues/2023-11-08-tor-weather-outage/</link>
          <pubDate>Fri, 27 Oct 2023 06:23:11 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2023-11-08-tor-weather-outage/</guid>
          <category>2023-11-08 18:29:00 &#43;0000</category>
          <description>&lt;p&gt;The &lt;a href=&#34;https://weather.torproject.org/&#34;&gt;Tor Weather service&lt;/a&gt; broke during an &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41252&#34;&gt;upgrade to Debian
Bookworm&lt;/a&gt;. The database was destroyed (but eventually recovered from
backups) and the service was down for about a week.&lt;/p&gt;
&lt;p&gt;Specifically, all changes made after 2023-10-27 06:23:11 UTC have been
lost. The service itself went offline some time before 2023-10-31
20:33 UTC, therefore about 4 days of data was lost. The data loss was
due to a flaw in the &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/upgrades/bookworm#postgresql-upgrades&#34;&gt;bookworm upgrade procedure&lt;/a&gt;, since then
&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/wiki-replica/-/commit/7f50f2989d7f98ff716844f416aa487fe74fd77c&#34;&gt;corrected&lt;/a&gt;, combined with expiration of the continuous backups
which would normally have allowed full recovery.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you have registered or made any changes on Tor Weather after
October 27th, you will need to redo those changes.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The service was restored on November 8th (2023-11-08 18:29 UTC) from
the October 27th backup. So the service was offline for a little over
8 days, from October 31st to November 8th.&lt;/p&gt;
&lt;p&gt;A more detailed discussion and post-mortem can be found in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41388&#34;&gt;issue
41388&lt;/a&gt;, including a &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41388/timeline&#34;&gt;full timeline of the incident&lt;/a&gt;, heroic
&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/backup#recovering-deleted-files&#34;&gt;deleted file recovery procedures&lt;/a&gt;, and &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41388/#future-improvements&#34;&gt;future improvements&lt;/a&gt; to
our systems that should keep this specific problem from happening
again.&lt;/p&gt;
&lt;p&gt;It is not in our habit to lose data, and we apologize profusely for
this mishap.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] donate website maintenance</title>
          <link>https://status.torproject.org/issues/2023-05-30-donate-website-maintenance/</link>
          <pubDate>Tue, 30 May 2023 03:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2023-05-30-donate-website-maintenance/</guid>
          <category>2023-05-30 04:00:00 &#43;0000</category>
          <description>&lt;p&gt;Planned maintenance on the donate.torproject.org website will make it
inaccessible for a few hours.&lt;/p&gt;
&lt;p&gt;The service should be restored at 2023-05-30 05:00:00 +0000.&lt;/p&gt;
&lt;p&gt;For details, see &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41109&#34;&gt;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41109&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] donate website maintenance</title>
          <link>https://status.torproject.org/issues/2023-05-08-donate-website-maintenance/</link>
          <pubDate>Mon, 08 May 2023 02:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2023-05-08-donate-website-maintenance/</guid>
          <category>2023-05-08 04:45:00 &#43;0000</category>
          <description>&lt;p&gt;Planned maintenance on the donate.torproject.org website will make it
inaccessible for a few hours.&lt;/p&gt;
&lt;p&gt;The service should be restored at 2023-05-08 05:00:00 +0000.&lt;/p&gt;
&lt;p&gt;For details, see &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41109&#34;&gt;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41109&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Tor Weather outage</title>
          <link>https://status.torproject.org/issues/2023-04-04-tor-weather-outage/</link>
          <pubDate>Tue, 04 Apr 2023 21:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2023-04-04-tor-weather-outage/</guid>
          <category>2023-04-12 15:46:00 &#43;0000</category>
          <description>&lt;p&gt;We needed to take down Tor Weather temporarily as it was &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41118&#34;&gt;spamming the
tor-commits mailing list&lt;/a&gt;. Once we have an acceptable solution to that problem it will be re-enabled. Sorry for the inconvenience.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Tor Weather outage</title>
          <link>https://status.torproject.org/issues/2023-03-30-tor-weather-outage/</link>
          <pubDate>Thu, 30 Mar 2023 12:41:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2023-03-30-tor-weather-outage/</guid>
          <category>2023-03-31 08:46:35 &#43;0000</category>
          <description>&lt;p&gt;Shortly after &lt;a href=&#34;https://lists.torproject.org/pipermail/tor-relays/2023-March/021110.html&#34;&gt;launching&lt;/a&gt; our new Tor Weather service the relay operator tornth &lt;a href=&#34;https://gitlab.torproject.org/tpo/network-health/tor-weather/-/issues/57&#34;&gt;found a serious violation of privacy expectations&lt;/a&gt; that resulted in suspending the Tor Weather service until that issue gets fixed. We are sorry for that inconvenience.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] outage at main provider</title>
          <link>https://status.torproject.org/issues/2023-02-04-hetzner-outage/</link>
          <pubDate>Sat, 04 Feb 2023 01:57:31 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2023-02-04-hetzner-outage/</guid>
          <category>2023-02-04 03:33:46 &#43;0000</category>
          <description>&lt;p&gt;We are witnessing a large outage at our main service provider,
Hetzner. According to the information we have gathered so far, four
switches (4!) have failed, affecting four (yes, again, 4!) of the
nodes in the 8-node Ganeti cluster, along with the virtual machines hosted on them.&lt;/p&gt;
&lt;p&gt;Affected servers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;check-01&lt;/li&gt;
&lt;li&gt;chives&lt;/li&gt;
&lt;li&gt;colchicifolium&lt;/li&gt;
&lt;li&gt;cupani&lt;/li&gt;
&lt;li&gt;fsn-node-01&lt;/li&gt;
&lt;li&gt;fsn-node-02&lt;/li&gt;
&lt;li&gt;fsn-node-04&lt;/li&gt;
&lt;li&gt;fsn-node-07&lt;/li&gt;
&lt;li&gt;gitlab-02&lt;/li&gt;
&lt;li&gt;gnt-fsn&lt;/li&gt;
&lt;li&gt;henryi&lt;/li&gt;
&lt;li&gt;loghost01&lt;/li&gt;
&lt;li&gt;majus&lt;/li&gt;
&lt;li&gt;materculae&lt;/li&gt;
&lt;li&gt;media-01&lt;/li&gt;
&lt;li&gt;onionoo-backend-01&lt;/li&gt;
&lt;li&gt;onionoo-backend-02&lt;/li&gt;
&lt;li&gt;onionoo-frontend-02&lt;/li&gt;
&lt;li&gt;perdulce&lt;/li&gt;
&lt;li&gt;polyanthum&lt;/li&gt;
&lt;li&gt;relay-01&lt;/li&gt;
&lt;li&gt;static-master-fsn&lt;/li&gt;
&lt;li&gt;staticiforme&lt;/li&gt;
&lt;li&gt;submit-01&lt;/li&gt;
&lt;li&gt;tbb-nightlies-master&lt;/li&gt;
&lt;li&gt;weather-01&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Upstream resolved the situation after a few hours of
downtime. According to Hetzner it was &amp;ldquo;a short disruption of a line
card in one of our access routers&amp;rdquo;. Details of the incident are
available in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41057&#34;&gt;tpo/tpa/team#41057&lt;/a&gt;.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] DNS services disruption</title>
          <link>https://status.torproject.org/issues/2022-12-07-dns/</link>
          <pubDate>Wed, 07 Dec 2022 03:15:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2022-12-07-dns/</guid>
          <category>2022-12-08 18:30:00</category>
          <description>&lt;p&gt;We are currently experiencing a higher than normal load on our DNS
infrastructure, and have received several reports of DNS resolution
failures, especially from users of Google DNS.&lt;/p&gt;
&lt;p&gt;Update, 2022-12-08: We have deployed mitigations that appear to have
successfully restored the service to usual levels.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/40996&#34;&gt;issue tpo/tpa/team#40996&lt;/a&gt; for details.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Email delivery problems</title>
          <link>https://status.torproject.org/issues/2022-11-30-mail-delivery/</link>
          <pubDate>Wed, 30 Nov 2022 20:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2022-11-30-mail-delivery/</guid>
          <category>2022-12-15 21:48:00</category>
          <description>&lt;p&gt;We have had repeated reports over the past weeks of delivery failures,
particularly at Gmail but it&amp;rsquo;s possible it affects mail delivery
across the board.&lt;/p&gt;
&lt;p&gt;Update, 2022-12-05: we are deploying emergency workarounds, see &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/40981&#34;&gt;issue
tpo/tpa/team#40981&lt;/a&gt; for progress updates.&lt;/p&gt;
&lt;p&gt;Update, 2022-12-14: DMARC and SPF records and DKIM signatures have been
deployed on the outgoing mail hosts, and reputation has improved. This may
negatively impact users &lt;em&gt;not&lt;/em&gt; using the submission server.&lt;/p&gt;
&lt;p&gt;Update, 2022-12-15: mail services considered restored, see
&lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/41009&#34;&gt;TPA-RFC-45&lt;/a&gt; for further improvements.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Network DDoS</title>
          <link>https://status.torproject.org/issues/2022-06-09-network-ddos/</link>
          <pubDate>Thu, 09 Jun 2022 14:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2022-06-09-network-ddos/</guid>
          <category>2023-05-09 09:00:00</category>
          <description>&lt;p&gt;We are experiencing a network-wide DDoS attempt impacting the
performance of the Tor network, which includes both onion services and
non-onion services traffic. We are currently investigating potential
mitigations.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update, 2023-04-05&lt;/strong&gt;: The DDoS has significantly reduced in volume over the last month, although there are intermittent spikes that can still affect the performance of relays that get hit by them. Overall performance has improved, but can occasionally be slower when using affected relays. We are making significant progress on implementing our &lt;a href=&#34;https://gitlab.torproject.org/tpo/core/tor/-/issues/40634&#34;&gt;Proof of Work defense&lt;/a&gt;, which should eliminate the incentive for many of these attacks. Other, more general DDoS defense work will happen after that.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update, 2022-10-25 12:00UTC&lt;/strong&gt;: There are several different kinds of overload going on that we are working on addressing. We are seeing performance degradation from an overload of exit connections and onion service circuit handshakes, causing our relays to deny circuit creations and degrade performance. Until we are able to determine mechanisms for rate limiting this activity, through development, experimentation, and testing, this DoS activity will continue to cause performance and reliability problems on the network. For details, see a &lt;a href=&#34;https://lists.torproject.org/pipermail/tor-relays/2022-October/020858.html&#34;&gt;recent thread&lt;/a&gt; on our tor-relays mailing list.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update, 2022-07-07 13:00UTC&lt;/strong&gt;: the overload we saw in the past few
weeks seems to be gone now, and performance has improved over the last couple
of days. However, the DDoS is not over; it has changed in nature. We are
currently investigating how we can mitigate the new overload, which is
affecting, e.g., our directory authorities. For details, see a
&lt;a href=&#34;https://lists.torproject.org/pipermail/tor-relays/2022-July/020686.html&#34;&gt;recent thread&lt;/a&gt;
on our tor-relays mailing list.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] routing issues at main provider</title>
          <link>https://status.torproject.org/issues/2022-01-25-routing-issues/</link>
          <pubDate>Tue, 25 Jan 2022 06:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2022-01-25-routing-issues/</guid>
          <category>2022-01-27 18:58:00 &#43;0000</category>
          <description>&lt;p&gt;We are experiencing intermittent network outages that typically
resolve themselves within a few hours. Preliminary investigations seem
to point at routing issues at Hetzner, but we have yet to get a solid
diagnostic. We&amp;rsquo;re following that issue in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/40601&#34;&gt;issue 40601&lt;/a&gt;.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Disruption of Metrics website and relay search</title>
          <link>https://status.torproject.org/issues/2021-06-05-metris-website/</link>
          <pubDate>Thu, 03 Jun 2021 08:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2021-06-05-metris-website/</guid>
          <category>2021-06-14 16:00:00</category>
          <description>&lt;p&gt;We&amp;rsquo;re currently facing stability issues with respect to our Metrics website and relay search. We are actively &lt;a href=&#34;https://gitlab.torproject.org/tpo/metrics/team/-/issues/15&#34;&gt;working on resolving those issues&lt;/a&gt;
as soon as possible.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] V2 Onion Services deprecation</title>
          <link>https://status.torproject.org/issues/2021-05-6-v2-deprecation/</link>
          <pubDate>Thu, 06 May 2021 16:45:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2021-05-6-v2-deprecation/</guid>
          <category>2021-11-08 00:00:00</category>
          <description>&lt;p&gt;&lt;strong&gt;If you are an onion site administrator, you must upgrade to v3 onion services as soon as possible.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;As we &lt;a href=&#34;https://blog.torproject.org/v2-deprecation-timeline&#34;&gt;announced last year&lt;/a&gt;, v2 onion services will be deprecated and obsolete in Tor 0.4.6.x. As of April 2021, Tor Browser Alpha uses this version of Tor and v2 addresses no longer work in this and future versions of Tor Browser Alpha.&lt;/p&gt;
&lt;p&gt;When Tor Browser stable moves to Tor 0.4.6.x in October 2021, v2 onion addresses will be completely unreachable.&lt;/p&gt;
&lt;p&gt;Why are we deprecating v2 onion services? Safety. Technologies used in v2 onion services are vulnerable to different kinds of attacks, and v2 onion services are no longer being developed or maintained. The new version of onion services provides improved encryption and enhanced privacy for administrators and users.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s critical that onion service administrators migrate to v3 onion services and work to inform users about this change as soon as possible.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.torproject.org/v2-deprecation-timeline&#34;&gt;Read more about the deprecation on our blog&lt;/a&gt;.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Disruption of v3 onion services and consensus building</title>
          <link>https://status.torproject.org/issues/2021-01-28-dir-auth/</link>
          <pubDate>Wed, 27 Jan 2021 23:00:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2021-01-28-dir-auth/</guid>
          <category>2021-01-29 05:30:00</category>
          <description>&lt;p&gt;We&amp;rsquo;re currently facing &lt;a href=&#34;https://lists.torproject.org/pipermail/network-health/2021-January/000661.html&#34;&gt;stability issues&lt;/a&gt; with respect to our v3 onion services and consensus building. We are actively working on resolving those issues as soon as possible.&lt;/p&gt;
</description>
        </item>
      
        <item>
          <title>[Resolved] Email delivery problems to Google</title>
          <link>https://status.torproject.org/issues/2021-01-25-gmail-bounce/</link>
          <pubDate>Mon, 25 Jan 2021 21:40:00 +0000</pubDate>
          <guid>https://status.torproject.org/issues/2021-01-25-gmail-bounce/</guid>
          <category>2021-01-25 21:40:00</category>
          <description>&lt;p&gt;Google mail servers are currently bouncing some emails coming from
&lt;code&gt;@torproject.org&lt;/code&gt;, as sent from our main mail server (&lt;code&gt;eugeni&lt;/code&gt;) and
third-party servers. This includes lists but may not include all our
mail servers (the donation platform, for example, seems to still
work).&lt;/p&gt;
&lt;p&gt;We are still investigating the problem, followup in &lt;a href=&#34;https://gitlab.torproject.org/tpo/tpa/team/-/issues/40149&#34;&gt;issue 40149&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update, 22:16UTC&lt;/strong&gt;: it looks like only &lt;em&gt;some&lt;/em&gt; emails are being
bounced, especially a particular email from a particular sysadmin
which made him jump the gun and post this disruption notice. We might
actually be in our normal &amp;ldquo;somewhat disrupted&amp;rdquo; delivery
situation. &lt;a href=&#34;https://en.wikipedia.org/wiki/Buddy_Holly_(song)&#34;&gt;Stay tuned for more happy days&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update, 2021-01-27 16:51UTC&lt;/strong&gt;: it turns out this was a false alarm,
and concerns only a single Google group that is refusing
emails. Emails are being delivered to Google fine.&lt;/p&gt;
</description>
        </item>
      
    </channel>
  </rss>

