We’ve all seen it time and time again: some misconfiguration with Citrix StoreFront and/or Citrix FAS, and you’re greeted with the “Cannot complete your request” message on your screen. Dig into the StoreFront logs and you’ll find the most interesting kinds of error messages, the kind that make you wonder whether you need to be a rocket scientist.
My story for this particular scenario: a CVAD deployment integrated with FAS, everything working just fine after some minor bumps like adding your resources to the Windows Authorization Access Group, and magic occurs, things start to work. See Cannot Complete Your Request Error only occurs to certain users connecting from ADC with Azure MFA over to Storefront (citrix.com) for the fun of it, and its buddy Common Resolutions to “Cannot Complete Your Request” Error when connecting directly to StoreFront Server (citrix.com)
OK, well, this works! Happy customer, happy consultant. After some time of testing, the customer started to migrate existing users to this solution… and stuff didn’t work. The same error as described in the article would occur, and well, not such a happy customer and consultant now. We troubleshot this and, what the hell, new users don’t have this problem, only existing users! After some more discussion with the customer it was pointed out that this domain has been alive for a while, NT-era long, and has been upgraded time and again to the latest and greatest Windows Server 2019.
This triggered me, and after some searching I came across Apps and APIs require access – Windows Server | Microsoft Docs, which explained the truth: we are missing stuff. I compared the membership of the groups “Pre-Windows 2000 Compatible Access” and “Windows Authorization Access Group” at the customer with my own environment and even a brand-new test setup, and the following members were missing:
It seems that when a domain is upgraded time and time again, these objects don’t get added. After adding them, everything began to work, and even the manually added resources like the CVAD servers and user objects no longer need to be in there.
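If you want to do the same comparison quickly, here is a minimal PowerShell sketch (assuming the ActiveDirectory RSAT module is installed) that dumps the membership of both groups so you can hold them against a healthy, freshly built domain:

```powershell
# Requires the ActiveDirectory module (part of RSAT)
Import-Module ActiveDirectory

# Dump the members of both built-in groups; compare the output
# against a freshly built reference domain to spot missing entries
foreach ($group in 'Pre-Windows 2000 Compatible Access',
                   'Windows Authorization Access Group') {
    "--- $group ---"
    Get-ADGroupMember -Identity $group |
        Select-Object Name, objectClass
}
```

Any members missing compared to the reference can then be added through Active Directory Users and Computers (or Add-ADGroupMember).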
Hope it helps!
On a recent project we were testing some scenarios for the use of VMware Blast BEAT through Citrix ADC. For more information regarding Blast, see the following article: VMware Blast Extreme Optimization Guide | VMware
Normally the Citrix ADC setup would be an SSL_BRIDGE vserver with an accompanying UDP vserver on the same IP, on ports 443 and 8443, which are the defaults for Blast and BEAT.
There are situations where port 8443 cannot be used because of company compliance, or constraints dictating that only port 443 may be used. In that case you need to change to an SSL offload setup, because in an SSL_BRIDGE scenario we can’t intercept any traffic or apply WebSocket usage through a profile.
We need to create an HTTP profile that enables WebSocket connections, which we can use on the new SSL offload vservers (see the CLI sketch after these steps).
Afterwards, bind this profile to the SSL offload vservers.
Then we need to change the service group used in the UDP vserver to port 8443, so the front end is 443 and the back end is 8443.
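Put together, the three steps would look something like this on the ADC CLI; the vserver, profile, server, and service group names are made up for illustration:

```
# 1. HTTP profile with WebSocket support enabled
add ns httpProfile http_prof_blast -webSocket ENABLED

# 2. Bind the profile to the SSL offload vserver(s)
set lb vserver vs_blast_tcp_443 -httpProfileName http_prof_blast

# 3. Rebind the UDP service group member on back-end port 8443
#    (the UDP vserver itself keeps listening on 443)
bind serviceGroup svg_blast_udp uag_server1 8443
```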
Lastly, the changes need to be reflected on the UAG side so that the firewall connections also use 443 for TCP/UDP and shared services. See the following articles for more information regarding firewall port requirements and alternate setups on the UAG: Firewall Rules for DMZ-Based Unified Access Gateway Appliances (vmware.com) and Blast TCP and UDP External URL Configuration Options (vmware.com)
With this setup you can service everything for the users on port 443. Do keep in mind that sharing ports this way causes excessive CPU load, which can seriously impact the setup. See the following article for some clarification: Unified Access Gateway (UAG) high CPU utilization : HTTP 503 error (78419) (vmware.com)
Hope it helps!
First things first: at the time of writing, Citrix ADC does not support VMware ESXi 7.0.1, according to the following article: Support matrix and usage guidelines (citrix.com)
This will obviously get supported in due time. But for those who, like me, are running it in the lab anyway, you will see issues such as the ADC instances losing connectivity or not loading the appropriate network drivers at boot. This is caused by the VMXNET3 interface.
Temporary workarounds include just rebooting the damn thing until it works… or flipping the network interface to E1000, which gives you a solidly booting/working appliance. Well, not entirely, because the GUI seems to slow down over time to the point of barely working. I decided to flip mine back to VMXNET3 and just do the reboot loop until it works.
Keep in mind that you need to copy your MAC address before removing the interface and re-adding it, to keep your license etc. working.
Hope it helps!
Update: ESXi 7.0 Update 1 build 16850804 is supported; anything above is not at this time. I’ve updated my environment to ESXi 7.0 Update 2 and the same issues are still present.
On a recent customer project there was a need to move away from VDA TLS encryption and migrate the connections from StoreFront to Citrix Gateway.
The customer previously had direct StoreFront connections and used the VDA TLS encryption setup to provide a TLS-encrypted session to the desktop or applications.
The VDA TLS encryption setup was too much engineering labor for day-to-day operations, so they asked for an alternative that would still provide a TLS-encrypted session from client to desktop.
Here we have two options. The first is to use Citrix Gateway for authentication in front of StoreFront, but this presents users with a new logon screen and then delegates the credentials to StoreFront.
The second is forcing the connections from StoreFront through Citrix Gateway by means of the optimal gateway routing feature; this changes nothing about the user experience, because the logon point remains StoreFront.
Option two was chosen, and after a quick and simple deployment a seamless migration with optimal gateway routing was in place.
The steps, in short:
1. Prepare: create a new Citrix Gateway DNS record and a new StoreFront load balancer IP in the current setup that is to be migrated.
2. Run the CVAD wizard on the NetScaler for a simple Citrix Gateway deployment and unbind any authentication policies, because these will not be used.
3. Configure all the necessary settings for a standard Citrix Gateway deployment and propagate these changes across the cluster.
4. Edit the web.config file of the configured store (under the IIS inetpub directory on the primary StoreFront server), search for optimalGatewayForFarmsCollection, make sure there is an optimalGatewayForFarms entry with enabledOnDirectAccess=”true”, and save the file.
5. Propagate the changes, then migrate the old DNS entry to the new StoreFront IP.
You will see that after logon the desktop brokering is forced through Citrix Gateway.
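For reference, the optimal gateway entry in web.config ends up looking roughly like the sketch below. The gateway name, hostname, and STA URL are made up here, and attribute names beyond enabledOnDirectAccess should be checked against the Citrix article linked below; treat this as an illustration of the shape, not an exact schema:

```xml
<optimalGatewayForFarmsCollection>
  <optimalGatewayForFarms enabledOnDirectAccess="true">
    <farms>
      <farm name="Controller" />
    </farms>
    <optimalGateway key="_" name="ExternalGateway"
                    enableSessionReliability="true" useTwoTickets="false">
      <hostnames>
        <add hostname="gateway.example.com" />
      </hostnames>
      <staUrls>
        <add staUrl="https://ddc01.example.com/scripts/ctxsta.dll" />
      </staUrls>
    </optimalGateway>
  </optimalGatewayForFarms>
</optimalGatewayForFarmsCollection>
```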
The following reference articles were used for configuration and testing:
How to Force Connections Through NetScaler Gateway Using Optimal Gateway Feature of StoreFront (citrix.com)
How to Configure Authentication at StoreFront using NetScaler Gateway – NetScaler Configuration (citrix.com)
FAQ: Configuring Authentication at StoreFront using NetScaler Gateway (citrix.com)
SSL configuration on VDA (citrix.com)
On a recent Citrix FAS deployment I encountered the following error when logging in to a published application or desktop: “Request not supported”.
Article https://support.citrix.com/article/CTX218941 explains that re-enrollment of the Domain Controller Authentication template, or another custom template for Kerberos usage, should resolve the error.
A little background on the environment: an already working Microsoft ADCS environment was in place and in use for other services. From a design/security perspective it was decided that two dedicated Microsoft ADCS servers would be used, with two Citrix FAS servers connecting to these new servers. The setup was working as expected, but the above error kept coming up when trying to access an application or desktop.
We tried re-enrolling the Domain Controller Authentication certificate, and this didn’t do the trick. Then we decided to let the domain controllers get their certificate from the new dedicated Microsoft ADCS servers for Citrix FAS, and this did do the trick, but with a side effect: the chain changed, other services would be negatively impacted, and a rollback was needed.
With this information a Microsoft support case was created, and ultimately they confirmed that what is mentioned in the Citrix support article should do the trick. OK, we had confirmation, and it did indeed work when using the new ADCS servers, but the issue with the original ADCS environment was still a mystery.
So next we decided to repoint the Citrix FAS servers to the existing Microsoft ADCS server to rule out any chain or other issues that might be in play. The result was exactly the same: a “Request not supported” error.
Digging deeper into the Microsoft ADCS environment, a check of the “NTAuthCertificates” store revealed that the existing server wasn’t in it while the new servers were. This explained the smart card logon failing when using the existing environment, because a requirement for smart card logon is that the issuing certificate authority is propagated to the “NTAuthCertificates” store. After adding the certificate, waiting for replication, and a reboot, everything was working as expected, also when moving to the new Microsoft ADCS environment for certificate issuing.
See the following screenshot of the Enterprise PKI snap-in in MMC, in which you can check and/or add the missing certificate:
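Alternatively, this can be checked and fixed from the command line with certutil; the certificate file name below is a placeholder for the issuing CA certificate that is missing from the store:

```powershell
# List the certificates currently published in the NTAuthCertificates store
certutil -viewstore -enterprise NTAuth

# Publish the issuing CA certificate to the NTAuthCertificates store
certutil -dspublish -f .\IssuingCA.cer NTAuthCA
```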
See the following articles for extra information:
Very happy to share my first presentation at Virtual Expo, together with Erik Bakker. Please click the following link for the recording, and all the other recordings as well.
Recently I was contacted by a customer who had problems performing SSO to a newly built desktop environment.
The setup: a greenfield resource domain with a two-way forest trust from the existing tenant. Basically everything was correct, but user logons would always get terminated at the desktop with invalid credentials.
After a short discussion and a remote session, the error messages in the logs about an invalid CRL made the issue clear. We troubleshot the AIA/CRL locations and found that the defaults were basically still in play; I explained that the default publication to AD isn’t a recommended approach. If any client can’t access the CRL, it will deny further actions (and clients that don’t understand AD, or aren’t joined to AD, won’t work either).
The screenshots below depict a default ADCS installation, which pushes out the default legacy templates and also publishes the CRL to LDAP, something I see far too often at customers:
The resolution for the CRL error was to revoke all the certificates used with FAS, change the CRL/AIA locations to a routable and reachable HTTP listener instead of LDAP (preferably an HA setup with a load balancer in front of it), and push out the new CRL. Afterwards, logons were using the SSO capabilities.
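A few certutil commands that may help when checking and validating this change; the certificate file name is a placeholder, and the exact publication URLs and flags need to be worked out per environment:

```powershell
# Inspect the currently configured CRL and AIA publication URLs on the CA
certutil -getreg 'CA\CRLPublicationURLs'
certutil -getreg 'CA\CACertPublicationURLs'

# After repointing them to the HTTP listener, publish a fresh CRL
certutil -crl

# From a client, verify that the chain, AIA, and CRL of an issued
# certificate can actually be retrieved over the configured URLs
certutil -verify -urlfetch .\fas_user_cert.cer
```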
Hope it helps!
When configuring VMware UAG as a reverse proxy, I encountered some issues last year that, as far as I could see, weren’t all too well documented. My reference article for the configuration was the following: https://techzone.vmware.com/configuring-web-reverse-proxy-identity-bridging-vmware-unified-access-gateway-vmware-workspace-one-operational-tutorial#985671
Basically, when you follow it to the letter in your test deployment with a test site, you will not end up with a working reverse proxy URL. When I encountered this I logged a GSS support case, and during the troubleshooting process it became clear that the documented proxy pattern wasn’t working whatsoever; the correct one should be (|/(.*)|) instead of (|/intranet(.*)|)
My understanding was that if you configured the instance ID and the proxy pattern accordingly, it would work, but that wasn’t the case. Only when not referencing it and just passing everything through did it begin to work.
When configuring multiple reverse proxy URLs, be sure to create corresponding proxy host patterns on the instance IDs.
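As an illustration, when deploying UAG through the PowerShell/INI method, two reverse proxy instances would look something like the sketch below. The instance IDs, hostnames, and destination URLs are made up, and the key names should be verified against the uagdeploy INI documentation:

```ini
[WebReverseProxy1]
instanceId=storefront
proxyDestinationUrl=https://storefront.internal.example.com
proxyHostPattern=desktops.example.com
proxyPattern=(|/(.*)|)

[WebReverseProxy2]
instanceId=exchange
proxyDestinationUrl=https://mail.internal.example.com
proxyHostPattern=mail.example.com
proxyPattern=(|/(.*)|)
```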
See the following screenshots for a working reference when using UAG as a reverse proxy for Exchange 2019 and Citrix StoreFront 1912:
Hope it helps!
This was quite a nice one to troubleshoot. It turns out there is a new configuration point for per-app VPN and iOS devices; at least it was new to me.
If you follow the configuration at https://www.citrix.com/blogs/2016/04/19/per-app-vpn-with-xenmobile-and-citrix-vpn/#:~:text=With%20the%20iOS%20per%20app,applications%20installed%20on%20the%20device. you’ll end up with a config that won’t open a VPN when accessing the browser. The solution is to change the default provider type in the policy from App Proxy to Packet tunnel, as also mentioned at https://docs.citrix.com/en-us/xenmobile/server/policies/vpn-policy.html, where it is explained as follows:
Provider Type: A provider type indicates whether the provider is a VPN service or proxy service. For VPN service, choose Packet tunnel. For proxy service, choose App proxy.
Hope it helps!
I’ve been playing around with the Citrix ADC IP Reputation feature (https://docs.citrix.com/en-us/citrix-adc/13/reputation/ip-reputation.html) in the lab for some time, and to be honest it’s such a small but very effective feature, yet I almost never see it active. Why is that?
If you have a Premium licensed ADC appliance, it’s a simple right-click > enable, and then you put the necessary arguments in a responder policy. See the following article for a quick how-to video: https://www.youtube.com/watch?v=WedxwiEVuG4, and basically that is it. Requests are filtered against the malicious IP database of the Webroot service provider, and you can drop them before they ever reach your network (and put in a nifty log action so that you can filter them as syslog entries in Citrix ADM).
I put a global responder policy in place with the expression CLIENT.IP.SRC.IPREP_IS_MALICIOUS and a reset with the accompanying log entry: CLIENT.IP.SRC + ” connection was dropped by Responder Action for malicious IP when accessing ” + HTTP.REQ.URL. The results were pretty much mind-blowing; see the following screenshot. Since the exploit CVE-2019-19781 (Vulnerability in Citrix Application Delivery Controller, Citrix Gateway, and Citrix SD-WAN WANOP appliance, https://support.citrix.com/article/CTX267027) the focus has pretty much been on that, but with the above rule in place I got more hits on the reputation feature than on the mitigation responder in the same period. (And these are probes/attacks we don’t know about!)
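For reference, the setup boils down to something like this on the ADC CLI; the action/policy names are made up, and the log level is an arbitrary choice:

```
# Enable the IP reputation feature (Premium license required)
enable ns feature Reputation Responder

# Audit message action so every hit lands in syslog/Citrix ADM
add audit messageaction msgact_iprep_drop ALERT "CLIENT.IP.SRC + \" connection was dropped by Responder Action for malicious IP when accessing \" + HTTP.REQ.URL"

# Reset any request whose source IP is flagged as malicious
add responder policy rsp_pol_iprep CLIENT.IP.SRC.IPREP_IS_MALICIOUS RESET -logAction msgact_iprep_drop

# Bind it globally at request default level
bind responder global rsp_pol_iprep 100 END -type REQ_DEFAULT
```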
So, to sum it up: this is a very good starting point if something like Application Firewall is a bridge too far, and it will most definitely improve your security with a simple setup.
Hope it helps.