UtterAccess.com
> SharePoint List Limit, Azure & Performance, Any Versions
 
   
blueman2
post Mar 27 2020, 08:36 AM
Post#1



Posts: 247
Joined: 15-November 14



Hope everyone is well. My intention during this downtime in business was to move our SharePoint back end to Azure. I had been thinking about this for a few years, as some of my lists were growing toward the 5,000-record limit I had read about, and a year or two ago I was experiencing some issues with performance.

The first thing I did was go to Azure to see the cost after the 12-month free trial. I must have been looking at something else when I checked a year or so ago, because now the cost when the trial expires seems MUCH higher than I had originally thought. It translates to about $365/month for the minimum service. At any time, and certainly in these times, that is way too expensive, especially compared to SharePoint, which costs a fraction of that per month.

[Attached image: Image_78.jpg (Azure pricing estimate)]


Also, it seems like the performance problems I had over the last two years are no longer showing up, even though I've now exceeded the 5,000-record limit on some lists.

I'm also reading, if I understand correctly, that the limit only comes into play when updating massive numbers of rows at once, not in day-to-day routine usage: forms, adding records, queries, and reports (which is most of what we need).

I'm also now finding out about creating flows, etc., which I can point at SharePoint tables and see the results in the Access desktop database without moving to a premium Flow (now Power Automate) plan, so there's a seamlessness I'd rather not give up, aside from the additional cost. I'm also wondering whether the current version of SharePoint has fixed some of the performance limitations that were talked about a few years ago.

I would like to get this decision off my plate and either focus on moving the database back end or move on to other things. There may be options out there less costly than Azure, but I'm wondering if I should take the attitude: "If it ain't broke...".

If anyone has any thoughts on this or previous or similar experiences, please chime in.





 
RobKoelmans
post Mar 27 2020, 09:15 AM
Post#2



Posts: 498
Joined: 25-November 14
From: Groningen, Netherlands


Hi Blueman,
What really works well is having a SharePoint Workflow call a web service (maybe Flow) and have the web service store the URL and properties in a SQL database. We have versions that create folders and subfolders automatically, but if your folder structure isn't changed by user activity, you don't need all that.
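On the Access side, reading a stored URL back out of that SQL table is trivial. A rough sketch (untested; the Documents table and its DocID/DocUrl columns are made up for illustration, assuming the table is linked in the front end):

CODE
' Open a document whose URL the web service stored in a linked SQL table.
Public Sub OpenDocument(ByVal lngDocID As Long)
    Dim strUrl As String
    ' Pull the stored URL for this document; Nz() guards against no match.
    strUrl = Nz(DLookup("DocUrl", "Documents", "DocID=" & lngDocID), "")
    If Len(strUrl) > 0 Then
        Application.FollowHyperlink strUrl   ' opens the SharePoint document
    End If
End Sub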

Btw, why don't you use SharePoint Online? Very cheap and well integrated with Flow. Have Flow store the metadata in CDS. There are samples of using PowerApps as your PDF viewer. Very simple and very affordable.
Kind regards
Rob.
This post has been edited by RobKoelmans: Mar 27 2020, 09:19 AM
 
blueman2
post Mar 28 2020, 09:20 AM
Post#3



Posts: 247
Joined: 15-November 14



I am using SharePoint right now.

QUOTE
My intention during this downtime in business was to move our sharepoint backend to Azure.


My question is whether we should continue to use it even though our list sizes are approaching the (supposed) 5,000-record limitation and in a few cases have passed it. As I said, we are currently having no issues but wanted to make a move in advance of problems. However, I'm having my doubts about it from a monetary and ease-of-use standpoint, and I wanted to see what others were experiencing with large SharePoint lists.


 
GroverParkGeorge
post Mar 28 2020, 09:29 AM
Post#4


UA Admin
Posts: 37,263
Joined: 20-June 02
From: Newcastle, WA


I haven't used SharePoint lists as linked tables for an Access relational database application for quite a while, but based on prior experience, I would say that 5,000 records per table is in the low-to-middle range. Performance issues seem to appear somewhere in the 20,000-plus records per table range. Those are estimates and can vary widely depending on concurrent users and activity levels.

I also have to question your numbers on the cost of Azure SQL databases. They seem to be on the high side to me. How did you arrive at your estimates?

--------------------
My Real Name Is George. Grover Park Consulting is where I did business for 20 years.
How to Ask a Good Question
Beginning SQL Server
 
blueman2
post Mar 28 2020, 09:57 AM
Post#5



Posts: 247
Joined: 15-November 14



George,

I'm very glad you asked, because I was wondering the same thing. I thought I had looked a year or so ago and seen a much lower cost, but I believe I was only looking at storage, not server usage, which someone from Azure pointed out when I was on the phone with them the other day. I will say that this person did not seem to have a good understanding, and for someone with my limited background, reading through their pages and pages of pricing descriptions is mind-boggling. So, in addition to the storage costs, which were minimal, here's what I saw for a SQL managed database, using their pricing calculator from this page: https://azure.microsoft.com/en-us/pricing/d...atabase/single/. I plugged in the least expensive option. Of course, this pricing assumes it is on 24/7, but even if it were not, it is still a huge increase over the cost of using SharePoint.

[Attached image: price1.png (Azure pricing calculator result)]


I also did the same for a MySQL database listed on one of their pages, which I was told might be an option. It was a little less expensive (about $250/month), but still way beyond our range, especially in these times.

Thanks
 
GroverParkGeorge
post Mar 28 2020, 11:43 AM
Post#6


UA Admin
Posts: 37,263
Joined: 20-June 02
From: Newcastle, WA


Look into the difference between vCore and DTU pricing.

Although it does look like vCore might have some advantages, for smaller databases it might be more expensive than purchasing by DTU. It's hard to know what your usage would be. Guessing from the number of records you're reporting (up to 5,000 in a table), I'm not sure which way is "better".
This post has been edited by GroverParkGeorge: Mar 28 2020, 11:44 AM

--------------------
My Real Name Is George. Grover Park Consulting is where I did business for 20 years.
How to Ask a Good Question
Beginning SQL Server
 
blueman2
post Mar 28 2020, 03:04 PM
Post#7



Posts: 247
Joined: 15-November 14



Thanks for finding that.

That does look like another option. I have no idea what our requirements are from a technical standpoint; I would imagine they're minimal. We have about 15 GB stored on SharePoint in lists, and our front end is about 35,000 KB. Usually one person is using it at a time, at most two, and the operations being done are not moving huge chunks of data. I tried reading the article, but I have no idea what most of the terms being referenced mean or how to figure out a DTU.
QUOTE
i.e They can all be targeted for production use, but what is different about them is the IO. In basic and standard, the IOPS per DTU is 2.5, whereas in Premium, it is a whopping 48 IOPS per DTU.


I only know what I've explained above. I suppose we can simply try it if we decide to go in that direction, as they are offering a free trial. But from other things I'm reading, and from what you mentioned, it seems like it may still be a few years before we reach a level that might cause problems. So maybe, for now, we should just stay put.
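For what it's worth, the arithmetic in that quote is at least straightforward: assuming Azure's published SKUs as examples, a Standard S2 database (50 DTUs) would be rated at about 50 × 2.5 = 125 IOPS, while a Premium P1 (125 DTUs) would get roughly 125 × 48 = 6,000 IOPS.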
 
AlbertKallal
post Mar 28 2020, 10:15 PM
Post#8


UtterAccess VIP
Posts: 3,065
Joined: 12-April 07
From: Edmonton, Alberta Canada


I don't see the need for an Azure server plan. Azure is the cloud OS, and on top of it you can have SQL (the Azure version), or even some accounting package (Azure version) for that OS.

You want just SQL Azure as a service, and you can pay by use only. This whole thing can be as confusing as cell phone plans.

So, no need for an Azure server; just go with SQL server. I see a starting price of about $15 per month, and the next jump is about $30.

Now, in regard to performance? Well, the 5,000-row limit on a low-end SharePoint (Office 365) plan means that if your table starts out greater than 5,000 rows during the first upload, then AFTER the upload the indexing does not occur (so you are in big trouble). However, if your table started out under 5,000 rows, then indexing can be had (index your other search columns before the table exceeds 5,000 rows). If the table then grows beyond 5,000 rows, you'll be OK.
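From the Access side, a cheap early-warning check is easy. A trivial sketch (untested; the Orders list name is made up):

CODE
' Warn before a linked SharePoint list crosses the indexing threshold,
' while there is still time to index the search columns.
Public Sub CheckListSize()
    Dim lngRows As Long
    lngRows = DCount("*", "Orders")          ' Orders = linked SharePoint list
    If lngRows > 4500 Then
        Debug.Print "Orders is at " & lngRows & _
                    " rows - index your search columns before 5,000."
    End If
End Sub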

As for performance? Well, that issue is more complex than the Azure plans, or a cell phone plan!

In some ways and in some cases, using SharePoint tables will run absolute circles around SQL server. The reason, of course, is that Access can cache the tables locally; it is a disconnected model. The one "quirk" is that, for some REALLY bad reason, Access can't use local indexing on any column EXCEPT the PK column.
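A common workaround for that quirk is to pull the cached list into a local temp table and index the columns you actually search on. A rough sketch (untested; table and column names are made up):

CODE
Public Sub BuildIndexedLocalCopy()
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Drop any previous copy (ignore the error if it doesn't exist yet).
    On Error Resume Next
    db.Execute "DROP TABLE tmpOrders"
    On Error GoTo 0

    ' Pull the rows from the (cached) linked list into a local table...
    db.Execute "SELECT * INTO tmpOrders FROM Orders", dbFailOnError

    ' ...and local JET/ACE indexes now apply to any column you need.
    db.Execute "CREATE INDEX idxCustomer ON tmpOrders (CustomerID)", dbFailOnError
End Sub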

On the other hand, Access can group, sort, and crunch 50,000 rows locally in WELL under one second, and do that with darn near whatever you want of that data. Local Access is so stupidly fast that you really don't care. So, assuming you have the Access cache turned on, in some cases you'll find performance FAR better than with SQL server. If your report has to pull 50,000 rows, then Access/SharePoint will be MUCH faster (because the data is local). Then again, how many reports actually pull 50,000 rows? A typical report has a group-by and "summary" rows of data. So, if the group-by returns only 1,000 rows, then with SQL server that is all that comes down the network pipe.
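To show what I mean by local crunching, here is a minimal sketch (untested; the Orders table and its columns are made up) of an aggregate that Access resolves entirely against the local cache of a linked list:

CODE
Public Sub SummarizeOrders()
    Dim rs As DAO.Recordset
    Dim strSql As String

    ' This GROUP BY runs against the local cache of the linked list,
    ' so no row-by-row traffic crosses the wire.
    strSql = "SELECT CustomerID, Count(*) AS OrderCount, " & _
             "Sum(Amount) AS TotalAmount " & _
             "FROM Orders GROUP BY CustomerID"

    Set rs = CurrentDb.OpenRecordset(strSql, dbOpenSnapshot)
    Do While Not rs.EOF
        Debug.Print rs!CustomerID, rs!OrderCount, rs!TotalAmount
        rs.MoveNext
    Loop
    rs.Close
End Sub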

You can certainly go beyond 5,000 rows. It depends a lot on the kind of SharePoint plan you are using with Access.

If you are using an Office 365 plan, then the 5,000-row ("initial indexing") limit still applies. I also believe the column count for tables is limited to around 50-60 columns. Both of these issues can REALLY kill your Access application. However, if you are not hitting these limits, then the reverse can be true.

On a full SharePoint server, I turned off the 5,000-row limit, and I was able to push the linked tables to 80,000 rows, and it worked quite well. But keep in mind, I was starting to notice the lack of indexing on the other columns (Access doesn't use local indexing on anything but the PK column).

So, even if your tables were to hit, say, 15,000 rows, you can still be far ahead of the game by using SharePoint.

Are you going to make hosted SQL server work?
Then you had BETTER HAVE great SQL server skills. If you don't, you're in for some learning, and a LOT MORE effort than using SharePoint is required. In other words, getting good performance with SQL server is a LOT MORE work than Access + SharePoint. But then again, if you start hitting walls with SharePoint, there is next to nothing you can do. With SQL server, you can get past these larger walls, but it requires much effort (and good SQL server skills) on your part.

So, like everything?
It depends.
For example, one big downside of SharePoint is that you can't update, say, 10,000 rows with a server-side query. And even if you did deploy some workflow or other code that runs server side to update those 10,000 rows? Well, because Access uses a sync-table model, the 10,000 updated rows NOW have to come down the pipe to Access, and that is REALLY slow (no better than if you had updated the rows locally and let Access sync up to the server).

With SQL server? I can send one update command and have it run server side. The only network cost is the one tiny update statement, and the query running server side consumes ZERO network bandwidth.
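From Access, that is just a DAO pass-through query. A minimal sketch (the DSN-less connection string, table, and columns are made up; the point is that only the short UPDATE string travels over the network):

CODE
Public Sub BulkUpdateServerSide()
    Dim qdf As DAO.QueryDef

    ' A temporary (unnamed) pass-through query: the UPDATE runs entirely
    ' on the server, and nothing comes back down the pipe.
    Set qdf = CurrentDb.CreateQueryDef("")
    qdf.Connect = "ODBC;Driver={ODBC Driver 17 for SQL Server};" & _
                  "Server=myserver.database.windows.net;Database=MyDb;" & _
                  "UID=myuser;PWD=mypassword;"
    qdf.SQL = "UPDATE dbo.Orders SET Status = 'Archived' " & _
              "WHERE OrderDate < '2019-01-01';"
    qdf.ReturnsRecords = False
    qdf.Execute dbFailOnError
End Sub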

So, for applications that have to update larger row sets? With SharePoint, you are absolutely doomed. But then again, if your application does not have large row counts, or you do NOT need to update large numbers of rows? Well, then the advantage shifts, and SharePoint will likely work better!

I should note, as an interesting FYI: they JUST turned off Access web services 3 days ago! Access web services should not be required for linked SharePoint tables, though.

And I could/should do a quick test to see if Office 365 still supports SharePoint tables without Access web services (it should, but I have not tested this).

So, you could likely get SQL Azure alone (not a server) on a DTU plan for about $30 per month. But to get decent performance? You WILL HAVE to do MUCH more work than what is required with Access/SharePoint tables.

If what you have fits within the limits of SharePoint, then I would continue with that setup. In most cases it will run faster than using SQL server, and do so with far less effort and learning on your part.

And if you start writing some code or something that runs slow, then you likely will not do that anymore. Where people really get whacked is when they have a local database and are using Access to update or crunch 100,000 rows, which Access can do in WELL under 1 second. So you might have a lot of complex update code; move that to SharePoint, and it is not going to work.

So part of this comes down to Access being so stupidly fast that it can often hide how much data you are (or were) dealing with.
I mean, how many posts do we see here where, after migrating to SQL server on a local office network, everything runs slower!!

But say the developer(s) had started out with SQL server? Then any time they did something that ran slow, they fixed the issue or took a different path. It is this "different" path, taken over a good many years, that is the key concept I am talking about.

So, if the application was local (say just a single desktop), such an application might have 10 years' worth of code and things that would run very poorly after a migration to SQL server or SharePoint.

But if the application has been running on SQL server for years on a local office network, then moving to SQL Azure might not be all that bad. However, EVEN such a setup will require tons of extra work to run well over the internet.

So, an office network running SQL server can work really well. The same application moved to SQL Azure will run some things slow as a turtle. I mean REALLY slow.

So, just keep in mind that your network connection over the internet is about 20-50 or more times slower than your el-cheapo office network. Failure to realize this will open up a big box of surprises.

I ran about 15 clients on hosted SQL server (2005-2010). It worked, but I had to be really careful with the ODBC connection. It did allow laptops to work anyplace and anytime. However, when Access 2010 came out with the new SharePoint caching system (and support for related tables)?

I moved everyone to SharePoint. It was MUCH, MUCH better, and I eliminated about 80% of support calls (ODBC disconnects were a really bad issue with hosted SQL). With SharePoint tables, the application would keep running just fine - even if you lost the internet for a bit!

So, for lower row counts and not too many related tables? SharePoint tables are the better choice. Even better, in most cases you can keep working without ANY internet connection. It is really ideal.

Too bad project Huron from the Access team was not kept. It was demoed at PDC 2009: a full offline/online sync system letting Access work with SQL server. That would have been a slam dunk for you right now. But with Access 2010 web services and SharePoint tables? Well, that was the road Access was taking at that point in time.

Now? Man oh man, it's time to dig up that Huron project and add it back into Access. A full sync model with SQL server (including SQL Azure) would be the PERFECT solution for cloud SQL server and Access.

Regards,
Albert









 
blueman2
post Mar 29 2020, 09:50 AM
Post#9



Posts: 247
Joined: 15-November 14



Albert, thanks so much for your very in-depth answer. It clarifies everything and points me in the right direction. Our lists/tables started at or close to zero rows, so we will not have that "initial indexing" limitation, and special thanks for clarifying that. That, plus the cost, speed, and somewhat seamless relationship with other Office 365 apps, tells me to stay put with SharePoint for now. I hope you and everyone contributing to UtterAccess stay well through this crisis.

 
RobKoelmans
post Apr 5 2020, 09:27 AM
Post#10



Posts: 498
Joined: 25-November 14
From: Groningen, Netherlands


Well, as said, I like my documents in SharePoint and my structured information (including information about documents) in my database. We just finished a dead-simple SharePoint Workflow that, on a document's OnCreate, nicely stores its metadata in SQL Server, including a link to the document. We do the same thing with Windows Server shared folders via FCI and PowerShell. My British customer is totally happy with it: his users mail a document to the correct SharePoint folder and see it right away in their SQL Server, in this case via my customer's Access Web App.

We have some triggers that also handle delete, rename, etc., but most customers never change something they have archived.
Rob
 

