Segmentation fault (core dumped) is back

Solved!


Plan

Premium

Country
US

 

Operating System

Ubuntu 19.04

 

My Question or Issue

 

Today I received a new update to the Spotify Linux client:

 

spotify-client:amd64 (1:1.1.0.237.g378f6f25-11, 1:1.1.5.153.gf614956d-16)

 

 

After that, Spotify would crash immediately after starting up. Starting the client from a terminal yielded the following error:

 

Segmentation fault (core dumped)

 

 

After troubleshooting for a while, I noticed the problem stopped happening when I deleted my local config directory:

 

rm -rf ~/.config/spotify
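A less destructive variant of the same experiment (a sketch; it assumes Spotify recreates the directory on next start) is to move the config directory aside so it can be restored later:

```shell
# move the config dir aside instead of deleting it;
# restore afterwards with the reverse mv if needed
if [ -d ~/.config/spotify ]; then
  mv ~/.config/spotify ~/.config/spotify.bak
fi
```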

 

 

Clearly this isn't a solution, so I dug in a little, spelunking through the output of running

spotify --show-console

One line stood out from the rest

13:00:33.032 W [collection_local_bans_file.cpp:51] bans: could not read local bans file: /home/<me>/.config/spotify/Users/<my account>-user/local_bans.pb

(obvious private stuff redacted)

 

So I checked and I don't have that file. But I do have this file

/home/<me>/.config/spotify/Users/<my account>-user/local-files.bnk 

So I tried deleting it to see what happened and started the Spotify client back up. It worked, but only from the terminal. However, after a few more tries, it only worked part of the time 😕 So I am not sure what to make of that anymore.

 

I would be happy to send my logs to someone in engineering if you would like.

 

EDIT

It is totally sporadic. Without fiddling with anything, starting the client works about 1 time in 3.


Accepted Solutions
Marked as solution

You are the man.

 

Can confirm that standard Spotify (.deb not snap) now works and starts as intended.

 

No need to install i386 packages actually.

And the versions from the previous post fail (apt --fix-broken install WON'T work) because they are just broken links on the FTP; these are the correct versions that allow apt to continue working properly.

 

curl http://http.us.debian.org/debian/pool/main/libt/libtasn1-6/libtasn1-6_4.14-2_amd64.deb -o ~/Downloads/libtasn1-6_4.14-2_amd64.deb

curl http://http.us.debian.org/debian/pool/main/c/curl/libcurl3-gnutls_7.64.0-4_amd64.deb -o ~/Downloads/libcurl3-gnutls_7.64.0-4_amd64.deb

curl http://ftp.us.debian.org/debian/pool/main/g/gnutls28/libgnutls30_3.6.9-4_amd64.deb -o ~/Downloads/libgnutls30_3.6.9-4_amd64.deb
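To finish up, the three downloaded packages can be installed with dpkg (a sketch; it assumes the files landed in ~/Downloads and that sudo is available):

```shell
# install each downloaded package in turn; dpkg will refuse anything broken,
# and the loop reports any file that hasn't been downloaded yet
for deb in ~/Downloads/libtasn1-6_4.14-2_amd64.deb \
           ~/Downloads/libcurl3-gnutls_7.64.0-4_amd64.deb \
           ~/Downloads/libgnutls30_3.6.9-4_amd64.deb; do
  if [ -f "$deb" ]; then
    sudo dpkg -i "$deb"
  else
    echo "not downloaded yet: $deb"
  fi
done
```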


32 Replies

I'm also getting the crash. I ran the same command and piped the output to a file. Uploading the file here; I would just post it, but the message can't exceed 20k chars.

Same thing here. Tried to post a reply about 6 times yesterday but gave up in the end. 

So now it seems to be letting me reply - I'm running Spotify version 1.1.5.153.gf614956d on Debian Buster.

 

It's totally sporadic; the process gets killed by SIGSEGV immediately after a call to 35.186.224.53 when it dies. It seems to go from working to dying sporadically. In all cases I can see from the logs that it connects fine, pulls deltas from playlists etc., then gets to [gaia_login_state_observer.cpp:42] received new_connection_id, then

 

09:13:09.681 D [gaia_manager.cpp:409            ] GAIA: GaiaManager::stateTransition, kServiceStatusStopped->kServiceStatusRunning
09:13:09.682 D [gaia_manager.cpp:1146           ] GAIA: TIMING(2075049165) GaiaManager::sendHelloHelper
09:13:09.699 D [gaia_manager.cpp:1146           ] GAIA: TIMING(2075049182) GaiaManager::sendHelloHelper
09:13:09.721 I [request_accounting.cpp:416      ] High request latency: https://spclient.wg.spotify.com/metadata took 175 ms
09:13:09.777 I [request_accounting.cpp:416      ] High request latency: https://spclient.wg.spotify.com/ads took 377 ms
09:13:09.777 I [request_accounting.cpp:416      ] High request latency: https://spclient.wg.spotify.com/ads took 377 ms
09:13:09.779 I [request_accounting.cpp:416      ] High request latency: https://spclient.wg.spotify.com/ads took 379 ms
Segmentation fault

Not sure if this isn't a race condition somewhere around connection. It's too random to be something on the machine, and it connects and pulls the playlist deltas etc. without fail every time it starts, regardless of whether it segfaults or not.


This is a log snippet from this morning when the client loads successfully:

09:34:01.285 E [pubsub_subscription.cpp:591     ] aq: LWS got LWS_CALLBACK_CLIENT_WRITEABLE
09:34:01.285 E [pubsub_subscription.cpp:591     ] aq: LWS got LWS_CALLBACK_CLIENT_WRITEABLE
09:34:01.290 E [gaia_login_state_observer.cpp:42] received new_connection_id <<REDACTED>> - starting

09:34:01.290 D [gaia_manager.cpp:409            ] GAIA: GaiaManager::stateTransition, kServiceStatusStopped->kServiceStatusRunning
09:34:01.290 D [gaia_manager.cpp:1146           ] GAIA: TIMING(2076300774) GaiaManager::sendHelloHelper
09:34:01.302 E [pubsub_subscription.cpp:591     ] aq: LWS got LWS_CALLBACK_CLIENT_RECEIVE_PONG
09:34:01.333 D [gaia_manager.cpp:1146           ] GAIA: TIMING(2076300816) GaiaManager::sendHelloHelper
09:34:01.334 I [request_accounting.cpp:416      ] High request latency: https://spclient.wg.spotify.com/ads took 263 ms
09:34:01.334 I [request_accounting.cpp:416      ] High request latency: https://spclient.wg.spotify.com/ads took 263 ms
09:34:01.335 I [request_accounting.cpp:416      ] High request latency: https://spclient.wg.spotify.com/ads took 264 ms
09:34:01.340 D [gaia_manager.cpp:1146           ] GAIA: TIMING(2076300823) GaiaManager::sendHelloHelper

WOULD NORMALLY SEGFAULT HERE WHEN FAILING

09:34:01.647 I [request_accounting.cpp:416      ] High request latency: hm://vanilla/v1 took 209 ms
09:34:02.711 I [startup_time_logger.cpp:434     ] usable_state: process_start:0,app_init:50,core_init:79,zlink_load_start:87,core_init_done:101,initial_ap_connection:108,initial_ap_connection_done:783,initial_view_load_start:1451,usable_state:2745
09:34:02.737 5 [playlist_be_pl4_context.cpp:99  ] [spotify:user:spotify:playlist:REDACTEDREDACTEDREDACT] Creating context
09:34:02.737 5 [playlist_backend.cpp:161        ] PlaylistBackendManager::resyncDelayedPlaylistNow(spotify:user:spotify:playlist:37REDACTEDREDACTEDREDA)
09:34:02.738 5 [playlist_be_pl4_context.cpp:99  ] [spotify:user:spotify:playlist:REDACTEDREDACTEDREDACT] Creating context
09:34:02.738 5 [playlist_backend.cpp:161        ] PlaylistBackendManager::resyncDelayedPlaylistNow(spotify:user:spotify:playlist:REDACTEDREDACTEDREDACT)
09:34:02.739 5 [playlist_be_pl4_context.cpp:99  ] [spotify:user:spotify:playlist:REDACTEDREDACTEDREDACT] Creating context
09:34:02.739 5 [playlist_backend.cpp:161        ] PlaylistBackendManager::resyncDelayedPlaylistNow(spotify:user:spotify:playlist:REDACTEDREDACTEDREDACTE

Continues to load properly from here.

Also, the client seems to be linked against libcef.so, which isn't a dependency of the Debian package (it isn't available on my machine)...

 

$ ldd /usr/bin/spotify | grep 'not found'
libcef.so => not found

I may never stop laughing...

 


@thereal_rjf wrote:

base64 encoded my reply to get past this ridiculous forum filtering 

 

sorry - got distracted by work, lol....

 

So libcef.so obviously is on the system at /usr/share/spotify/libcef.so and can be ignored - it's just not on the library path my shell sees.
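This can be double-checked by pointing the loader at the app directory when running ldd (a sketch; paths are assumed from the stock .deb layout):

```shell
# ldd reports libcef.so as "not found" only because /usr/share/spotify is not
# on the default library search path; adding it resolves the entry
cef=/usr/share/spotify/libcef.so
if [ -e "$cef" ]; then
  LD_LIBRARY_PATH=/usr/share/spotify ldd /usr/bin/spotify | grep libcef
else
  echo "libcef.so not at $cef (non-deb install?)"
fi
```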

 

Turned on core dumping. From what I can gather, the backtrace at a segfault sits in libcurl, which ties in with what I am seeing, i.e. if you're not connected to the Internet / going through an SSL proxy which isn't working properly, it starts every time...

 

(gdb) bt
#0  0x00007fabd03ab7a1 in  () at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#1  0x00007fabd038d847 in  () at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#2  0x00007fabd038da26 in  () at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#3  0x00007fabd03c88f2 in  () at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#4  0x00007fabd039446b in  () at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#5  0x00007fabd03983fc in  () at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#6  0x00007fabd03a9aef in  () at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#7  0x00007fabd03aaf91 in curl_multi_perform ()
    at /lib/x86_64-linux-gnu/libcurl-gnutls.so.4
#8  0x0000000003d092f4 in  ()
#9  0x00000000041ddd8c in  ()
#10 0x00007fabc77a1fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#11 0x00007fabc6b9382f in epoll_wait
    (epfd=1920976640, events=0x1, maxevents=1920964832, timeout=0)
    at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
#12 0x0000000000000000 in  ()
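For anyone wanting to reproduce the backtrace: core dumps usually have to be enabled first (a sketch; where the core file ends up depends on kernel.core_pattern, and Ubuntu pipes cores to apport by default):

```shell
# allow unlimited-size core files in this shell
ulimit -c unlimited
# see where the kernel puts cores (a leading | means they go to a handler)
core_pattern=$(cat /proc/sys/kernel/core_pattern)
echo "core_pattern: $core_pattern"
# after the crash, load the core into gdb and print the backtrace with "bt":
#   gdb /usr/share/spotify/spotify <corefile>
```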

$ ltrace -ttt -T -f -s4096 -l libcurl-gnutls.so.4 spotify

 

snip
[pid 27058] 1561132340.883868 spotify->curl_easy_setopt(0x7fb55c7fece0, 60, 0, 0) = 0 <0.001778>
[pid 27064] 1561132340.885808 <... curl_multi_wait resumed> ) = 0 <0.102711>
[pid 27064] 1561132340.885985 spotify->curl_multi_add_handle(0x7fb544000b50, 0x7fb55c7fece0, 0x7fb55c011ed0, 0) = 0 <0.002662>
[pid 27064] 1561132340.888752 spotify->curl_multi_perform(0x7fb544000b50, 0x7fb561ff94c8, 0x7fb55c011fa0, 0 <unfinished ...>
[pid 27064] 1561132340.892539 libcurl-gnutls.so.4->curl_url_cleanup(0, 0x7fb5b74dcb60, 0x7fb54497db58, 0x7fb54497db38) = 0 <0.001924>
[pid 27064] 1561132340.894534 libcurl-gnutls.so.4->curl_url(0, 0x7fb5b74dcb60, 0x7fb54497db58, 0x7fb54497db38) = 0x7fb5448973e0 <0.001493>
[pid 27064] 1561132340.896078 libcurl-gnutls.so.4->curl_url_set(0x7fb5448973e0, 0, 0x7fb53872db20, 520 <unfinished ...>
[pid 27064] 1561132340.898519 libcurl-gnutls.so.4->curl_url(115, 0, 8, 512) = 0x7fb544309360 <0.001710>
[pid 27064] 1561132340.900326 <... curl_url_set resumed> ) = 0 <0.004225>
[pid 27064] 1561132340.900364 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 1, 0x7fb55c8003d0, 0) = 0 <0.001549>
[pid 27064] 1561132340.902030 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 2, 0x7fb55c8003e8, 64) = 11 <0.002745>
[pid 27064] 1561132340.905055 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 3, 0x7fb55c8003f0, 64) = 12 <0.002562>
[pid 27064] 1561132340.908067 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 4, 0x7fb55c8003f8, 64) = 13 <0.001975>
[pid 27064] 1561132340.910088 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 5, 0x7fb55c8003d8, 0) = 0 <0.001550>
[pid 27064] 1561132340.911745 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 7, 0x7fb55c800400, 0) = 0 <0.001293>
[pid 27064] 1561132340.913108 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 6, 0x7fb55c8003e0, 1 <unfinished ...>
[pid 27064] 1561132340.914309 libcurl-gnutls.so.4->curl_msnprintf(0x7fb561ff91e1, 7, 0x7fb5b75331e0, 443 <unfinished ...>
[pid 27064] 1561132340.915922 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fb561ff91e1, 7, 0x7fb5b75331e0, 0x7fb561ff90d0) = 3 <0.001929>
[pid 27064] 1561132340.918227 <... curl_msnprintf resumed> ) = 3 <0.003880>
[pid 27064] 1561132340.918831 <... curl_url_get resumed> ) = 0 <0.005675>
[pid 27064] 1561132340.918885 libcurl-gnutls.so.4->curl_url_get(0x7fb5448973e0, 8, 0x7fb55c800408, 0) = 0 <0.002921>
[pid 27064] 1561132340.921917 libcurl-gnutls.so.4->curl_getenv(0x7fb5b752c3d8, 0x7fb5441617e0, 24, 0x7fb544000020) = 0 <0.003185>
[pid 27064] 1561132340.925528 libcurl-gnutls.so.4->curl_getenv(0x7fb5b752c3e1, 0x7fb5441617e0, 0x7fb5b752c3d8, 24) = 0 <0.001919>
[pid 27064] 1561132340.927525 libcurl-gnutls.so.4->curl_getenv(0x7fb561ff92e0, 0x7fb5441617e0, 0x7fb5b752c3e1, 1) = 0 <0.001928>
[pid 27064] 1561132340.929534 libcurl-gnutls.so.4->curl_getenv(0x7fb561ff92e0, 0x7fb561ff92e0, 0xffffff9f, 32) = 0 <0.001743>
[pid 27064] 1561132340.931361 libcurl-gnutls.so.4->curl_getenv(0x7fb5b752c3ea, 0x7fb561ff92e0, 0x7fb561ff92e0, 32) = 0 <0.001537>
[pid 27064] 1561132340.933036 libcurl-gnutls.so.4->curl_getenv(0x7fb5b752c3f4, 0x7fb561ff92e0, 0x7fb5b752c3ea, 10) = 0 <0.002303>
[pid 27064] 1561132340.935449 libcurl-gnutls.so.4->curl_msnprintf(0x7fb561ff9180, 128, 0x7fb5b7533b0a, 443 <unfinished ...>
[pid 27064] 1561132340.938344 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fb561ff9180, 128, 0x7fb5b7533b0a, 0x7fb561ff90a0) = 26 <0.003683>
[pid 27064] 1561132340.942240 <... curl_msnprintf resumed> ) = 26 <0.006762>
[pid 27064] 1561132340.942519 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fb561ff88e0, 2049, 0x7fb5b7534588, 0x7fb561ff88c8) = 53 <0.001891>
[pid 27064] 1561132340.944521 --- SIGSEGV (Segmentation fault) ---
[pid 27063] 1561132340.944715 +++ killed by SIGSEGV +++
[pid 27097] 1561132340.944730 +++ killed by SIGSEGV +++
[pid 27058] 1561132340.944743 +++ killed by SIGSEGV +++
[pid 27057] 1561132340.944755 +++ killed by SIGSEGV +++
snip

 

It segfaults a lot less often when running under ltrace, which smells of a race condition somewhere. Someone with more clue might be of use now 🙂

Actually, that seems to be the problem...

 

I've run Spotify multiple times and saved the ltrace output; on all good runs with no segfaults, the curl_mvsnprintf call has a maximum maxlength of around 256.

 

On EVERY segfault run, it has a 2049 maxlength without fail just before it segfaults, such as:

 

[pid 27749] 1561132996.706568 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fd250b808e0, 2049, 0x7fd29ced0588, 0x7fd250b808c8) = 53 <0.002874>
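A quick way to tally the maxlength arguments across a saved ltrace run (a sketch; trace.log is an assumed filename):

```shell
# count how often each maxlength value is passed to curl_mvsnprintf;
# on a crashing run a lone 2049 shows up right at the end
trace=trace.log
if [ -f "$trace" ]; then
  grep -o 'curl_mvsnprintf([^,]*, *[0-9]*' "$trace" \
    | awk -F', *' '{print $2}' | sort -n | uniq -c
else
  echo "no $trace here"
fi
```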

Same here with Ubuntu 19.04. I could not get Spotify to run and always got the SIGSEGV until I ran it through ltrace. Here, no SIGSEGV, but a black window with no output, even after a 10-minute wait. So, yes, possibly a race condition.

 

To add to the list of ltrace/strace, this is what valgrind says. Looks like the code jumps somewhere it should not.

 

==19000== Memcheck, a memory error detector
==19000== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==19000== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==19000== Command: spotify --show-console
==19000==
vex amd64->IR: unhandled instruction bytes: 0x66 0xF 0x3A 0x61 0xC1 0x4 0xF 0x82 0xA1 0xF6
vex amd64->IR: REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
vex amd64->IR: VEX=0 VEX.L=0 VEX.nVVVV=0x0 ESC=0F3A
vex amd64->IR: PFX.66=1 PFX.F2=0 PFX.F3=0
==19000== valgrind: Unrecognised instruction at address 0x41c5ccb.
==19000== at 0x41C5CCB: ??? (in /usr/share/spotify/spotify)
==19000== by 0x22836FC: ??? (in /usr/share/spotify/spotify)
==19000== by 0x41C0DB0: ??? (in /usr/share/spotify/spotify)
==19000== by 0x41C0B39: ??? (in /usr/share/spotify/spotify)
==19000== by 0x38FD489: ??? (in /usr/share/spotify/spotify)
==19000== by 0x38FD3E4: ??? (in /usr/share/spotify/spotify)
==19000== by 0x38FADCF: ??? (in /usr/share/spotify/spotify)
==19000== by 0x38FAB7D: ??? (in /usr/share/spotify/spotify)
==19000== by 0x38FABE7: ??? (in /usr/share/spotify/spotify)
==19000== by 0x421055C: ??? (in /usr/share/spotify/spotify)
==19000== by 0x4673B1F: ??? (in /lib/x86_64-linux-gnu/ld-2.29.so)
==19000== Your program just tried to execute an instruction that Valgrind
==19000== did not recognise. There are two possible reasons for this.
==19000== 1. Your program has a bug and erroneously jumped to a non-code
==19000== location. If you are running Memcheck and you just saw a
==19000== warning about a bad jump, it's probably your program's fault.
==19000== 2. The instruction is legitimate but Valgrind doesn't handle it,
==19000== i.e. it's Valgrind's fault. If you think this is the case or
==19000== you are not sure, please let us know and we'll try to fix it.
==19000== Either way, Valgrind will now raise a SIGILL signal which will
==19000== probably kill your program.

Yeah, looks like it's either a bad pointer being passed in to curl's buffer location or something. I don't really know what I'm on about, so take it with a pinch of salt, but my guess is that it's probably a threading bug in how the app is using curl, by the looks of it.

 

Most of the segfault reports I see online seem to be due to threads trying operations on the same handles etc.

 

https://curl.haxx.se/libcurl/c/libcurl-tutorial.html#Multi-threading

 

https://curl.haxx.se/libcurl/c/threadsafe.html

 

Seems that it's always crashing out in the same way when it happens, which is intermittent but affected by ltrace (less likely to segfault if tracing without filtering for the curl lib), so perhaps a bit less racy when traced. The maxlength size_t is always topped out at 2049, and there are no such calls of that size to the curl gnutls lib when it runs properly, so probably something is being modified by another thread.

 

 

Does seem odd though. Interestingly, there's an SSL cert always read in by Spotify which is exactly 2049 in length. 

 

23867 1561368755.293945 <... SYS_read resumed> , "-----BEGIN CERTIFICATE-----\nMIIFvTCCA6WgAwIBAgIITxvUL1S7L0swDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UE\nBhMCQ0gxFTATBgNVBAoTDFN3a
XNzU2lnbiBBRzEhMB8GA1UEAxMYU3dpc3NTaWdu\nIFNpbHZlciBDQSAtIEcyMB4XDTA2MTAyNTA4MzI0NloXDTM2MTAyNTA4MzI0Nlow\nRzELMAkGA1UEBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxM
Y\nU3dpc3NTaWduIFNpbHZlciBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A\nMIICCgKCAgEAxPGHf9N4Mfc4yfjDmUO8x/e8N+dOcbpLj6VzHVxumK4DV644N0Mv\nFz0fyM5oEMF4rhkDKxD6LHmD9ui5aLlV8gR
Epzn5/ASLHvGiTSf5YXu6t+WiE7br\nYT7QbNHm+/pe7R20nqA1W6GSy/BJkv6FCgU+5tkL4k+73JU3/JHpMjUi0R86TieF\nnbAVlDLaYQ1HTWBCrpJH6INaUFjpiou5XaHc3ZlKHzZnu0jkg7Y360g6rw9njxcH\n6ATK72o
xh9TAtvmUcXtnZLi2kUpCe2UuMGoM9ZDulebyzYLs2aFK7PayS+VFheZt\neJMELpyCbTapxDFkH4aDCyr0NQp4yVXPQbBH6TCfmb5hqAaEuSh6XzjZG6k4sIN/\nc8HDO0gqgg8hm7jMqDXDhBuDsz6+pJVpATqJAHgE2cn0m
RmrVn5bi4Y5FZGkECwJ\nMoBgs5PAKrYYC51+jUnyEEp/+dVGLxmSo5mnJqy7jDzmDrxHB9xzUfFwZC8I+bRH\nHTBsROopN4WSaGa8gzj+ezku01DwH/teYLappvonQfGbGHLy9YR0SslnxFSuSGTf\njNFusB3hB48IHpmcc
elM2KX3RxIfdNFRnobzwqIjQAtz20um53MGj
MGg6cFZrEb6\n5i/4z3GcRm25xBWNOHkDRUjvxF3XCO6HOSKGsg0PWEP3calILv3q1h8CAwEAAaOB\nrDCBqTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nF6DNweRBtjpbO8tFnb0cwpj6h
lgwHwYDVR0jBBgwFoAUF6DNweRBtjpbO8tFnb0c\nwpj6hlgwRgYDVR0gBD8wPTA7BglghXQBWQEDAQEwLjAsBggrBgEFBQcCARYgaHR0\ncDovL3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggI
B\nAHPGgeAn0i0P4JUw4ppBf1AsX19iYamGamkYDHRJ1l2E6kFSGG9YrVBWIGrGvShp\nWJHckRE1qTodvBqlYJ7YH39FkWnZfrt4csEGDyrOj4VwYaygzQu4OSlWhDJOhrs9\nxCrZ1x9y7v5RoSJBsXECYxqCsGKrXlcSH9/
L3XWgwF15kIwb4FDm3jH+mHtwX6WQ\n2K34ArZv02DdQEsixT2tOnqfGhpHkXkzuoLcMmkDlm4fS/Bx/uNncqCxv1yL5PqZ\nIseEuRuNI5c/7SXgz2W79WEE790eslpBIlqhn10s6FvJbakMDHiqYMZWjwFaDGi8\naRl5xB9
+lwW/xekkUV7U1UtT7dkjWjYDZaPBA61BMPNGG4WQr2W11bHkFlt4dR2X\nem1ZqSqPe97Dh4kQmUlzeMg9vVE1dCrV8X5pGyq7O70luJpaPXJhkGaH7gzWTdQR\ndAtq/gsD/KNVV4n+SsuuWxcFyPKNIzFTONItaj+CuY0Ia
vdeQXRuwxF+B6wpYJE/\nOMpXEA29MC/HpeZBoNquBYeaoKRlbEwJDIm6uNO5wJOKMPqN5ZprFQFOZ6raYlY+\nhAhm0sQ2fac+EPyI4NSA5QC9qvNOBqN6avlicuMJT+ubDgEj8Z+7fNzcbBGXJbLy\ntGMU0gYqZ4yD9c7qB
9iaah7s5Aq7KkzrCWA5zspi2C5u\n-----END CERTIFICATE-----\n", 4096) = 2049 <0.000026>

 

Which is 

 

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 5700383053117599563 (0x4f1bd42f54bb2f4b)
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: C = CH, O = SwissSign AG, CN = SwissSign Silver CA - G2
        Validity
            Not Before: Oct 25 08:32:46 2006 GMT
            Not After : Oct 25 08:32:46 2036 GMT
        Subject: C = CH, O = SwissSign AG, CN = SwissSign Silver CA - G2
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (4096 bit)
...

 

I'm wondering whether somehow this is where the 2049 size_t is coming from, passed into curl_mvsnprintf before it segfaults. I've tested it again and again, and the biggest size_t passed to curl_mvsnprintf normally is 256 when everything is working properly.
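One way to sanity-check the 2049 theory is to compare against the on-disk size of that root cert (a sketch; the path is an assumption based on where Debian/Ubuntu's ca-certificates package installs the SwissSign Silver CA - G2 root):

```shell
# the on-disk PEM size should match the 2049 bytes seen in the read() above
pem=/usr/share/ca-certificates/mozilla/SwissSign_Silver_CA_-_G2.crt
if [ -f "$pem" ]; then
  wc -c < "$pem"
else
  echo "cert not found at $pem"
fi
```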

 

 

When it segfaults, there's the read of that SSL cert; two seconds later there's the curl_mvsnprintf with the same size_t, and the segfault...

 

 

23867 1561368755.293945 <... SYS_read resumed> , "-----BEGIN CERTIFICATE-----\nMIIFvTCCA6WgAwIBAgIITxvUL1S7L0swDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UE\nBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMYU3dpc3NTaWdu\nIFNpbHZlciBDQSAtIEcyMB4XDTA2MTAyNTA4MzI0NloXDTM2MTAyNTA4MzI0Nlow\nRzELMAkGA1UEBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMY\nU3dpc3NTaWduIFNpbHZlciBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A\nMIICCgKCAgEAxPGHf9N4Mfc4yfjDmUO8x/e8N+dOcbpLj6VzHVxumK4DV644N0Mv\nFz0fyM5oEMF4rhkDKxD6LHmD9ui5aLlV8gREpzn5/ASLHvGiTSf5YXu6t+WiE7br\nYT7QbNHm+/pe7R20nqA1W6GSy/BJkv6FCgU+5tkL4k+73JU3/JHpMjUi0R86TieF\nnbAVlDLaYQ1HTWBCrpJH6INaUFjpiou5XaHc3ZlKHzZnu0jkg7Y360g6rw9njxcH\n6ATK72oxh9TAtvmUcXtnZLi2kUpCe2UuMGoM9ZDulebyzYLs2aFK7PayS+VFheZt\neJMELpyCbTapxDFkH4aDCyr0NQp4yVXPQbBH6TCfmb5hqAaEuSh6XzjZG6k4sIN/\nc8HDO0gqgg8hm7jMqDXDhBuDsz6+pJVpATqJAHgE2cn0mRmrVn5bi4Y5FZGkECwJ\nMoBgs5PAKrYYC51+jUnyEEp/+dVGLxmSo5mnJqy7jDzmDrxHB9xzUfFwZC8I+bRH\nHTBsROopN4WSaGa8gzj+ezku01DwH/teYLappvonQfGbGHLy9YR0SslnxFSuSGTf\njNFusB3hB48IHpmccelM2KX3RxIfdNFRnobzwqIjQAtz20um53MGjMGg6cFZrEb6\n5i/4z3GcRm25xBWNOHkDRUjvxF3XCO6HOSKGsg0PWEP3calILv3q1h8CAwEAAaOB\nrDCBqTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nF6DNweRBtjpbO8tFnb0cwpj6hlgwHwYDVR0jBBgwFoAUF6DNweRBtjpbO8tFnb0c\nwpj6hlgwRgYDVR0gBD8wPTA7BglghXQBWQEDAQEwLjAsBggrBgEFBQcCARYgaHR0\ncDovL3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIB\nAHPGgeAn0i0P4JUw4ppBf1AsX19iYamGamkYDHRJ1l2E6kFSGG9YrVBWIGrGvShp\nWJHckRE1qTodvBqlYJ7YH39FkWnZfrt4csEGDyrOj4VwYaygzQu4OSlWhDJOhrs9\nxCrZ1x9y7v5RoSJBsXECYxqCsGKrXlcSH9/L3XWgwF15kIwb4FDm3jH+mHtwX6WQ\n2K34ArZv02DdQEsixT2tOnqfGhpHkXkzuoLcMmkDlm4fS/Bx/uNncqCxv1yL5PqZ\nIseEuRuNI5c/7SXgz2W79WEE790eslpBIlqhn10s6FvJbakMDHiqYMZWjwFaDGi8\naRl5xB9+lwW/xekkUV7U1UtT7dkjWjYDZaPBA61BMPNGG4WQr2W11bHkFlt4dR2X\nem1ZqSqPe97Dh4kQmUlzeMg9vVE1dCrV8X5pGyq7O70luJpaPXJhkGaH7gzWTdQR\ndAtq/gsD/KNVV4n+SsuuWxcFyPKNIzFTONItaj+CuY0IavdeQXRuwxF+B6wpYJE/\nOMpXEA29MC/HpeZBoNquBYeaoKRlbEwJDIm6uNO5wJOKMPqN5ZprFQFOZ6raYlY+\nhAhm0sQ
2fac+EPyI4NSA5QC9qvNOBqN6avlicuMJT+ubDgEj8Z+7fNzcbBGXJbLy\ntGMU0gYqZ4yD9c7qB9iaah7s5Aq7KkzrCWA5zspi2C5u\n-----END CERTIFICATE-----\n", 4096) = 2049 <0.000026>
23867 1561368758.455309 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fc157ffc8e0, 2049, 0x7fc1c10db588, 0x7fc157ffc8c8 <unfinished ...>

 

Looking at the "good" trace, there seems to be a pattern of a 7, 128 and a 256 maxlength which repeats for a while...

 

24146 1561369233.774199 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff91e1, 7, 0x7f1172fcb1e0, 0x7f111dff90d0 <unfinished ...>
24146 1561369233.776327 <... curl_mvsnprintf resumed> ) = 3 <0.002121>
24146 1561369233.796293 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9180, 128, 0x7f1172fcbb0a, 0x7f111dff90a0 <unfinished ...>
24146 1561369233.797648 <... curl_mvsnprintf resumed> ) = 26 <0.001349>
24146 1561369234.085892 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9260, 256, 0x7f1172fc6be8, 0x7f111dff9170 <unfinished ...>
24146 1561369234.087576 <... curl_mvsnprintf resumed> ) = 57 <0.001677>
24146 1561369234.145964 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff91e1, 7, 0x7f1172fcb1e0, 0x7f111dff90d0 <unfinished ...>
24146 1561369234.147276 <... curl_mvsnprintf resumed> ) = 3 <0.001305>
24146 1561369234.161595 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9180, 128, 0x7f1172fcbb0a, 0x7f111dff90a0 <unfinished ...>
24146 1561369234.162919 <... curl_mvsnprintf resumed> ) = 26 <0.001318>
24146 1561369234.464201 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9260, 256, 0x7f1172fc6be8, 0x7f111dff9170 <unfinished ...>
24146 1561369234.465540 <... curl_mvsnprintf resumed> ) = 57 <0.001332>
24146 1561369235.416065 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff91e1, 7, 0x7f1172fcb1e0, 0x7f111dff90d0 <unfinished ...>
24146 1561369235.417466 <... curl_mvsnprintf resumed> ) = 3 <0.001388>
24146 1561369235.437715 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9180, 128, 0x7f1172fcbb0a, 0x7f111dff90a0 <unfinished ...>
24146 1561369235.446180 <... curl_mvsnprintf resumed> ) = 26 <0.008463>
24146 1561369235.847103 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9260, 256, 0x7f1172fc6be8, 0x7f111dff9170 <unfinished ...>
24146 1561369235.849220 <... curl_mvsnprintf resumed> ) = 57 <0.002108>
24146 1561369238.366226 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff91e1, 7, 0x7f1172fcb1e0, 0x7f111dff90d0 <unfinished ...>
24146 1561369238.367640 <... curl_mvsnprintf resumed> ) = 3 <0.001404>
24146 1561369238.385993 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9180, 128, 0x7f1172fcbb0a, 0x7f111dff90a0 <unfinished ...>
24146 1561369238.388487 <... curl_mvsnprintf resumed> ) = 26 <0.002487>
24146 1561369238.748475 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f111dff9260, 256, 0x7f1172fc6be8, 0x7f111dff9170 <unfinished ...>
24146 1561369238.750093 <... curl_mvsnprintf resumed> ) = 57 <0.001612>

 

but on a segfault one, this jumps from 128 to 2049....

 

 

23867 1561368758.042046 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fc157ffd1e1, 7, 0x7fc1c10da1e0, 0x7fc157ffd0d0 <unfinished ...>
23867 1561368758.043383 <... curl_mvsnprintf resumed> )    = 3 <0.001331>
23867 1561368758.062192 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fc157ffd180, 128, 0x7fc1c10dab0a, 0x7fc157ffd0a0 <unfinished ...>
23867 1561368758.065569 <... curl_mvsnprintf resumed> )    = 26 <0.003370>
23867 1561368758.364087 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fc157ffd260, 256, 0x7fc1c10d5be8, 0x7fc157ffd170 <unfinished ...>
23867 1561368758.365942 <... curl_mvsnprintf resumed> )    = 57 <0.001848>
23867 1561368758.435456 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fc157ffd1e1, 7, 0x7fc1c10da1e0, 0x7fc157ffd0d0 <unfinished ...>
23867 1561368758.438399 <... curl_mvsnprintf resumed> )    = 3 <0.002934>
23867 1561368758.452583 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fc157ffd180, 128, 0x7fc1c10dab0a, 0x7fc157ffd0a0 <unfinished ...>
23867 1561368758.454919 <... curl_mvsnprintf resumed> )    = 26 <0.002331>
23867 1561368758.455309 libcurl-gnutls.so.4->curl_mvsnprintf(0x7fc157ffc8e0, 2049, 0x7fc1c10db588, 0x7fc157ffc8c8 <unfinished ...>

 

 

 

Interestingly, if I get a core dump when it segfaults and run spotify through gdb with the core, I can see what was at the address of the output buffer for curl_mvsnprintf... i.e.

 

29666 1561378833.635000 libcurl-gnutls.so.4->curl_mvsnprintf(0x7f07eaffa8e0, 2049, 0x7f083f6e7588, 0x7f07eaffa8c8) = 53 <0.002275>

which shows...

 

(gdb) x/1s 0x7f07eaffa8e0
0x7f07eaffa8e0: "17 bytes stray data read before trying h2 connection\n"

which can be seen in the curl source here...

 

https://github.com/curl/curl/blob/master/lib/http2.c

 

in the http2_connisdead function...

 

 

 

/*
 * The server may send us data at any point (e.g. PING frames). Therefore,
 * we cannot assume that an HTTP/2 socket is dead just because it is readable.
 *
 * Instead, if it is readable, run Curl_connalive() to peek at the socket
 * and distinguish between closed and data.
 */
static bool http2_connisdead(struct connectdata *conn)
{
  int sval;
  bool dead = TRUE;

  if(conn->bits.close)
    return TRUE;

  sval = SOCKET_READABLE(conn->sock[FIRSTSOCKET], 0);
  if(sval == 0) {
    /* timeout */
    dead = FALSE;
  }
  else if(sval & CURL_CSELECT_ERR) {
    /* socket is in an error state */
    dead = TRUE;
  }
  else if(sval & CURL_CSELECT_IN) {
    /* readable with no error. could still be closed */
    dead = !Curl_connalive(conn);
    if(!dead) {
      /* This happens before we've sent off a request and the connection is
         not in use by any other transfer, there shouldn't be any data here,
         only "protocol frames" */
      CURLcode result;
      struct http_conn *httpc = &conn->proto.httpc;
      ssize_t nread = -1;
      if(httpc->recv_underlying)
        /* if called "too early", this pointer isn't setup yet! */
        nread = ((Curl_recv *)httpc->recv_underlying)(
          conn, FIRSTSOCKET, httpc->inbuf, H2_BUFSIZE, &result);
      if(nread != -1) {
        infof(conn->data,
              "%d bytes stray data read before trying h2 connection\n",
              (int)nread);
        httpc->nread_inbuf = 0;
        httpc->inbuflen = nread;
        (void)h2_process_pending_input(conn, httpc, &result);
      }
      else
        /* the read failed so let's say this is dead anyway */
        dead = TRUE;
    }
  }

  return dead;
}

 

 

I compiled gnutls and curl (built against gnutls) from latest source and ran my version of Spotify. All fixed. Looks like the problem is in curl's handling of multi events... which rather embarrassingly is the problem pointed out by the chap in the other thread (Ubuntu problems / segfaults) about the libcurl problems 🙂 Sorry chaps for wasting more of your time. Apologies for blaming the Spotify app here. Cheers.
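For reference, the rebuild was roughly this shape (a sketch, not the exact commands used; the curl version number and configure options are assumptions, and build deps such as libgnutls28-dev are needed first):

```shell
# fetch and build a then-current curl against GnuTLS, then refresh the linker cache
src=curl-7.65.1.tar.gz
[ -f "$src" ] || curl -fsSLO "https://curl.haxx.se/download/$src" \
  || echo "download failed (offline?)"
if [ -f "$src" ]; then
  { tar xf "$src" && cd curl-7.65.1 \
      && ./configure --without-ssl --with-gnutls \
      && make && sudo make install && sudo ldconfig; } \
    || echo "build step failed (missing build deps?)"
fi
```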

This seems to be a workaround:

 

$ until spotify; do echo try again; done
[0100/000000.820699:FATAL:memory.cc(22)] Out of memory. size=4194304
Segmentation fault (core dumped)
try again
Segmentation fault (core dumped)
try again
Segmentation fault (core dumped)
try again
[0100/000000.589403:FATAL:memory.cc(22)] Out of memory. size=4194304
Segmentation fault (core dumped)
try again
Segmentation fault (core dumped)
try again

Please turn your head away from your computer before throwing up, I am not responsible for broken keyboards and/or laptops

20 minutes to get it running.

 

Woah.

Snap version runs well (compatible libraries are probably inside the snap)

Totally disagree.

 

Snap version has no media keys working AND suffers from segfault just like the .deb one.

 

...at least, on my machine running an up-to-date 19.04 Ubuntu.

Also running up-to-date Ubuntu 19.04. The snap version hasn't crashed once. But it does take longer to spawn.
