Bitbucket not reachable while importing SVN projects

Hello there. We have a self-hosted Bitbucket Server, version 5.14.0, running in Docker. When importing some of the larger projects from SVN, I get a timeout (504 Gateway Timeout) and the Bitbucket server stops responding. The data is stored on a NetApp file share. While the server is unreachable I can't even cd into any directory on the file share (which is mounted into the Docker container); it appears to be completely blocked and doesn't respond at all. If the project is small, the server starts responding again after a while and the import completes. But with this behaviour we can't use SVN Mirror in a production environment. Looking at the logs for that repository conversion, I find this exception:

2019-03-12 14:38:45,307 sync - Updating latest fetched revision for svn-remote "svn" to r62062
2019-03-12 14:38:45,425 sync - Running periodical Git garbage collector.
2019-03-12 14:38:45,429 sync - Executing [git, -c, gc.autoDetach=0, gc]; environmentVariables={};workingDirectory=/var/atlassian/application-data/bitbucket/shared/data/repositories/62
2019-03-12 14:44:01,737 sync - fatal: fsync error on 'objects/pack/tmp_pack_YV2zHo': I/O error
error: failed to run repack

2019-03-12 14:44:01,740 sync - fatal: fsync error on 'objects/pack/tmp_pack_YV2zHo': I/O error
error: failed to run repack
 com.a.a.a.b.j: fatal: fsync error on 'objects/pack/tmp_pack_YV2zHo': I/O error
error: failed to run repack

        at com.a.a.a.a.b.a(SourceFile:70)
        at com.a.a.a.a.b.runGitCommand(SourceFile:56)
        at org.tmatesoft.translator.m.R$2.a(SourceFile:62)
        at org.tmatesoft.translator.m.P.a(SourceFile:43)
        at com.a.a.a.d.C.b(SourceFile:708)
        at com.a.a.a.d.C.a(SourceFile:421)
        at com.a.a.a.d.C.a(SourceFile:381)
        at com.a.a.a.d.C.a(SourceFile:327)
        at com.a.a.a.d.C.a(SourceFile:162)
        at com.a.a.a.d.O.c(SourceFile:43)
        at com.a.a.a.d.O.b(SourceFile:36)
        at org.tmatesoft.translator.m.ai.a(SourceFile:1453)
        at org.tmatesoft.translator.m.ai.c(SourceFile:996)
        at org.tmatesoft.translator.m.ai.a(SourceFile:1019)
        at org.tmatesoft.translator.m.ai.b(SourceFile:1077)
        at org.tmatesoft.translator.m.d.h.a(SourceFile:242)
        at org.tmatesoft.translator.m.d.h.a(SourceFile:148)
        at org.tmatesoft.subgit.stash.mirror.tasks.SgSyncTask.doSync(SourceFile:93)
        at org.tmatesoft.subgit.stash.mirror.tasks.SgSyncTask.runSyncCommands(SourceFile:82)
        at org.tmatesoft.subgit.stash.mirror.tasks.SgSyncTask.runSecurely(SourceFile:77)
        at org.tmatesoft.subgit.stash.mirror.tasks.SgMirrorTask.lambda$run$0(SourceFile:110)
        at com.atlassian.stash.internal.user.DefaultEscalatedSecurityContext.call(DefaultEscalatedSecurityContext.java:58)
        at org.tmatesoft.subgit.stash.mirror.tasks.SgMirrorTask.run(SourceFile:108)
        at org.tmatesoft.subgit.stash.mirror.tasks.SgMirrorTask.run(SourceFile:22)
        at org.tmatesoft.subgit.stash.mirror.scheduler.SgTaskScheduler$TaskWrapper.runTask(SourceFile:981)
        at org.tmatesoft.subgit.stash.mirror.scheduler.SgTaskScheduler$TaskWrapper.run(SourceFile:943)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
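
As a side note, a quick way to check whether the share itself blocks on fsync'ed writes, independent of Bitbucket, is to force a synced write onto the mount from inside the container. A rough sketch; the mount point and test path are taken from the log above and may need adjusting:

    # does a plain fsync'ed write to the NFS-mounted Bitbucket home hang or fail?
    # (hypothetical test file; adjust the path to wherever the share is mounted)
    mount | grep /var/atlassian/application-data/bitbucket
    dd if=/dev/zero of=/var/atlassian/application-data/bitbucket/shared/fsync-test \
        bs=1M count=256 conv=fsync
    rm -f /var/atlassian/application-data/bitbucket/shared/fsync-test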

Hello.

Thank you for reporting this issue.

The exception you have mentioned appears to be caused by the periodic Git GC being invoked during the translation, so I’d suggest switching the GC off as a workaround. This can be done by adding the ‘triggerGitGC’ setting to the mapping configuration:

[svn]
  trunk =…
  branches =…
  tags =…
  …
  triggerGitGC = false
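
With ‘triggerGitGC = false’ the add-on will no longer trigger GC during the translation, so it may be worth running the garbage collection manually from time to time during a quiet period. A rough sketch, reusing the command from the log above (the repository path is specific to your instance and shown here only as an example):

    # run Git GC manually while no SVN sync is in progress
    # (repository path copied from the log above; adjust it for your instance)
    cd /var/atlassian/application-data/bitbucket/shared/data/repositories/62
    git -c gc.autoDetach=0 gc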

The situation will definitely require deeper investigation if disabling the Git GC doesn’t help. In that case, please provide us with the add-on logs from the affected repository. The easiest way to collect the logs is the Create ZIP feature available on the SVN Mirror - Support tab, but it’s only available in add-on v.3.4.6. If you are using an earlier version, the logs should be collected from the filesystem (a quick sketch follows the list):

  • repository-specific add-on log:

    BITBUCKET_HOME/data/repositories/REPO_ID/subgit/logs/svnmirror.log
    
  • global add-on log:

    BITBUCKET_HOME/log/svnmirror.log
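
For example, both files could be grabbed from a shell roughly like this (a sketch; BITBUCKET_HOME and REPO_ID are placeholders for your instance’s home directory and the numeric repository id, and the repository path may differ slightly on your setup):

    # placeholders: set these to your instance's values
    BITBUCKET_HOME=/var/atlassian/application-data/bitbucket
    REPO_ID=62
    zip svnmirror-logs.zip \
        "$BITBUCKET_HOME/data/repositories/$REPO_ID/subgit/logs/svnmirror.log" \
        "$BITBUCKET_HOME/log/svnmirror.log"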
    

Also, could you please let us know which add-on version you are using and which protocol is used to connect to the NetApp file share? Those ‘I/O error’ messages look as if the file share occasionally becomes unavailable; could there be a network problem?

@ildarhm I will check all that on Monday.

@ildarhm, the protocol used for the file share is NFS, and the SVN Mirror version is 3.4.6. I’ve had a chat with our IT maintenance team: an old gateway was configured on the host server, which prevented large file blocks from being written without timeouts (a firewall issue; the IP range had moved). They have now configured the correct gateway, so perhaps the problem is solved. Nonetheless, I think this behaviour is problematic; the Bitbucket server should always remain responsive. I have attached the logs.

@ildarhm, today I imported a larger project, and it seems that the issue is solved now that our IT maintenance department has configured the correct gateway.

Hello,
thank you for letting us know.

The logs don’t contain any significant errors other than the one you have already mentioned, so I think the gateway issue was indeed the root cause.